Pdf:APRJA Minor Tech

A Peer-Reviewed Journal About
MINOR TECH

Christian Ulrik Andersen & Geoff Cox (Eds.)

Volume 12, Issue 1, 2023
ISSN 2245-7755
Contents

- Christian Ulrik Andersen & Geoff Cox
  Editorial: Toward a Minor Tech
- Manetta Berends & Simon Browne
  About wiki-to-print
- Camille Crichlow
  Scaling Up, Scaling Down: Racialism in the Age of Big Data
- Jack Wilson
  Minor Tech and Counter-revolution: Tactics, Infrastructures, QAnon
- Teodora Sinziana Fartan
  Rendering Post-Anthropocentric Visions: Worlding As a Practice of Resistance
- Jung-Ah Kim
  Weaving and Computation: Can Traditional Korean Craft Teach Us Something?
- Freja Kir
  Glitchy, Caring, Tactical: A Relational Study Between Artistic Tactics and Minor Tech
- xenodata co-operative (Alexandra Anikina, Yasemin Keskintepe)
  Spirit Tactics: (Techno)magic as Epistemic Practice in Media Arts and Resistant Tech
- Alasdair Milne
  Lurking in the Gap between Philosophy of Mind and the Planetary
- Susanne Förster
  The Bigger the Better?! The Size of Language Models and the Dispute over Alternative Architectures
- Inga Luchs
  AI for All? Challenging the Democratization of Machine Learning
- Sandy Di Yu
  Time Enclosures and the Scales of Optimisation: From Imperial Temporality to the Digital Milieu
- Inte Gloerich
  Towards DAOs of Difference: Reading Blockchain Through the Decolonial Thought of Sylvia Wynter
- Shusha Niederberger
  Calling the User: Interpellation and Narration of User Subjectivity in Mastodon and Trans*Feminist Servers
- nate wessalowski & Mara Karagianni
  From Feminist Servers to Feminist Federation
- Contributors
A Peer-Reviewed Journal About_
ISSN: 2245-7755
Editors: Christian Ulrik Andersen and Geoff Cox
Published by: Digital Aesthetics Research Centre, Aarhus University
Design: Manetta Berends and Simon Browne (CC)
Fonts: Happy Times at the IKOB by Lucas Le Bihan, AllCon by Simon Browne
CC license: ‘Attribution-NonCommercial-ShareAlike’
Christian Ulrik Andersen & Geoff Cox

Editorial: Toward a Minor Tech
The three characteristics of minor tech are the deterritorialization of technology, the connection of the individual to a political immediacy, and the collective arrangement of its operations. Which amounts to this: that “minor” no longer characterises certain technologies, but describes the revolutionary conditions of any technology within what we call big (or ubiquitous). ––
Deleuze and Guattari, Kafka: Toward a Minor Literature (18)
This journal issue addresses what we are calling "minor tech", making reference to Gilles Deleuze and Félix Guattari's essay "Kafka: Toward a Minor Literature" (written in 1975). They propose the concept of minor literature as opposed to great or established literature — the use of a major language that subverts it from within. "Becoming-minoritarian" in this sense — to use a related concept from A Thousand Plateaus — involves the recognition of particular instances of power and the ability of the repressed minority to gain some degree of autonomy of expression. "Expression must break forms, encourage ruptures and new sproutings", as Deleuze and Guattari put it (28).
A characteristic of minor technologies is that everything in them is politics.
For our purpose, this notion of the minor is a relative position to major (or big) tech. This also partly invokes the issue of scale, the theme of the 2023 edition of transmediale festival. In the call, the organizers state that the festival is an exploration of “how technological scale sets conditions for relations, feelings, democratic processes, and infrastructures.” (https://2023.transmediale.de/). The importance of scale becomes apparent in the massification of images and texts on the internet, and the application of various scalar machine techniques that try to make things comprehensible for human and non-human readers alike; big computing begets big data. However, “we have a problem with scale”, as Anna Lowenhaupt Tsing puts it (37), in its connection to modernist master narratives that organise life on an increasingly globalised scale (the 'bigness' of capitalism). Alternatively, she writes, we need to “notice” the small details and not assume that these need to be scaled up to be effective, as is the orthodoxy of research. In technical fields, not least machine learning, this problem with scale has severe consequences, with ensuing discrimination and environmental damage.
A minor technology is that which a minority constructs within the grammar of technology.
Small tech, on the other hand, operates at human scale (more peer-to-peer than server-client) and "stutters and stammers the major" (to use the words of Deleuze and Guattari once more). More pragmatically, as artist-researcher Marloes de Valk puts it in the Damaged Earth Catalogue: “Small technology, smallnet and smolnet are associated with communities using alternative network infrastructures, delinking from the commercial Internet.” Further issues that arise from scale question the paradigms of 'big computing'; for instance, the dynamics between big data and small technology, attentive to what Cathy Park Hong calls “minor feelings” (that derive from racial and economic discrimination in society); how to bring together new material and minoritarian cultural assemblages between humans and nonhumans, ecology, and technological infrastructure and systems; or, how this relates to minor practices and collective action. Ultimately, though, notions of big or small become less important, and everything is to be considered political (or micropolitical) if we follow our conceptual trajectory.
As such, this publication sets out to question some of the major ideals of technology and its problems of bigness, extending it to follow the three main characteristics identified in Deleuze and Guattari's essay, namely deterritorialization, political immediacy, and collective value. We would argue that these remain pertinent concepts: as a means to deterritorialize from repressive conceptual, social, affective, linguistic and technical regimes, and transform the conditions through which technology can become a "collective machine of expression" (Deleuze and Guattari 18).
A characteristic of a minor technology is that in it everything takes on a collective value.
Following a process of open exchanges online and a three-day in-person research workshop in London, at London South Bank University and King's College London, this edition of APRJA brings together researchers who think through the potentials of 'the minor', and what we are referring to as minor (or minority) tech. As stated, this is not a problem of scale alone (although many of the contributions take this approach) but of politics – how minorities struggle for autonomy of expression. Together, authors address minor tech through its relation to a range of pressing concerns, exploring: racism in predictive policing technology; QAnon as an assemblage of ‘minor techs’; speculative practices of 'worlding'; parallels between computing and the craft of weaving; artistic tactics in opposition to large-scale digital platforms; attempts to decentre Western epistemologies through spirit tactics and (techno)magic; parallels between planetary-scale computation and a philosophy of mind; problems associated with generative large language models; inflated claims of democratizing machine learning; processes of optimisation and our changing experience of time; connections between DAOs, countercultural blockchain and decoloniality; user subjectivity in Mastodon and the Trans*Feminist Servers project; and the final word is with Trans*Feminist Servers whose practice exemplifies the collective value of minor tech.
A minor technology is an intensive utilisation of technology — it utilises the inner tensions of technology.
This publication (APRJA) further develops short articles that were first written at speed during the workshop, published as a newspaper and distributed at transmediale (the PDF can be downloaded from here). As well as exploring our shared interest in and understanding of minor tech, our approach has been to implement these principles in practice. Consequently, the publication has been produced using wiki-to-print tools, based on MediaWiki software, Paged Media CSS techniques and the JavaScript library Paged.js, which renders the PDF. In other words, no Adobe products have been used. As such, the divisions of labour between writers, editors, designers, and software developers have been brought closer together in ways that challenge some of the normative paradigms of research process and publication, in keeping with the applied ethics of minor tech.
— Aarhus/London, June 2023
Acknowledgements
Thanks to all contributors and initial workshop participants, including Roel Roscam Abbing, Mateus Domingos, Edoardo Lomi & Macon Holt, Anna Mladentseva, marthe van dessel (alias: ooooo), mika motskobili (alias: vo ezn), ai carmela netîrk, and to Manetta Berends and Simon Browne (Varia) for their design work and the development of the wiki-to-print platform. Additional thanks to Marloes de Valk, Elena Marchevska, Tung-Hui Hu, who contributed to the events in London, and Daniel Chávez Heras, Gabriel Menotti, Søren Pold, Winnie Soon, and Magda Tyzlik-Carver who supported the workshop as respondents, and finally to the anonymous peer reviewers who helped to sharpen the essays. The workshop and publication were supported by CSNI (London South Bank University), SHAPE Digital Citizenship (Aarhus University), and Graduate School of Arts (Aarhus University).
Works cited:
Deleuze, Gilles, and Félix Guattari. Kafka: Toward a Minor Literature [1975], Translated by Dana Polan. University of Minnesota Press, 1986.
———. A Thousand Plateaus: Capitalism and Schizophrenia. Translated by Brian Massumi. University of Minnesota Press, 1987.
Hong, Cathy Park. Minor Feelings: An Asian American Reckoning. One World, 2020.
Tsing, Anna Lowenhaupt. The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins. Princeton University Press, 2015.
de Valk, Marloes. Damaged Earth Catalogue, 2022, https://damaged.bleu255.com/Small_Technology/. Accessed 22 Jan, 2023.
Manetta Berends & Simon Browne
About wiki-to-print
This journal is made with wiki-to-print, a collective publishing environment based on MediaWiki software[1], Paged Media CSS[2] techniques and the JavaScript library Paged.js[3], which renders a preview of the PDF in the browser. Using wiki-to-print allows us to work shoulder-to-shoulder as collaborative writers, editors, designers, and developers, in a non-linear publishing workflow where design and content unfold at the same time, allowing the one to shape the other.
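To make the pipeline concrete, here is a minimal sketch (not the wiki-to-print code itself, which is linked in the notes below) of how a wiki page could be fetched as rendered HTML through the standard MediaWiki API and handed to Paged.js for a paginated preview. The API path and the stylesheet name "print.css" are illustrative assumptions only.

```typescript
// Minimal sketch of a wiki-to-print-style flow: fetch rendered wiki HTML,
// then let Paged.js paginate it according to Paged Media CSS rules.
// The endpoint path and "print.css" are assumptions for illustration.
import { Previewer } from "pagedjs";

async function previewWikiPage(apiBase: string, pageTitle: string): Promise<void> {
  // MediaWiki's "parse" action returns the page rendered as HTML.
  const url =
    `${apiBase}/api.php?action=parse&format=json&origin=*` +
    `&page=${encodeURIComponent(pageTitle)}`;
  const response = await fetch(url);
  const data = await response.json();
  const html: string = data.parse.text["*"];

  // Paged.js chunks the content into pages, applying @page rules
  // (size, margins, running headers) from the listed stylesheets.
  const previewer = new Previewer();
  await previewer.preview(html, ["print.css"], document.body);
}

previewWikiPage("https://cc.vvvvvvaria.org", "Pdf:APRJA Minor Tech");
```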
Following the idea of "boilerplate code" which is written to be reused, we like to think of wiki-to-print as a boilerplate as well, instead of thinking of it as a product, platform or tool. The code that is running in the background is a version of previous wiki-printing instances, including:
- the work on the Diversions[4] publications by Constant[5] and OSP[6]
- the book Volumetric Regimes[7] by Possible Bodies[8] and Manetta Berends[9]
- TITiPI's[10] wiki-to-pdf environments[11] by Martino Morandi
- Hackers and Designers'[12] version wiki2print[13] that was produced for the book Making Matters[14]
So, wiki-to-print/wiki-to-pdf/wiki2print is not standalone, but part of a continuum of projects that see software as something to learn from, adapt, transform and change. The code that is used for making this journal is released as yet another version of this network of connected practices[15].
This wiki-to-print is hosted at CC[16] (creative crowds). While moving from cloud to crowds, CC is a thinking device that helps us consider how to hand over ways of working and share a space for publishing experiments with others.
Notes
1. https://www.mediawiki.org
2. https://www.w3.org/TR/css-page-3/
3. https://pagedjs.org
4. https://diversions.constantvzw.org
5. https://constantvzw.org
6. https://osp.kitchen
7. http://data-browser.net/db08.html + https://volumetricregimes.xyz
8. https://possiblebodies.constantvzw.org
9. https://manettaberends.nl
10. http://titipi.org
11. https://titipi.org/wiki/index.php/Wiki-to-pdf
12. https://hackersanddesigners.nl
13. https://github.com/hackersanddesigners/wiki2print
14. https://hackersanddesigners.nl/s/Publishing/p/Making_Matters._A_Vocabulary_of_Collective_Arts
15. https://git.vvvvvvaria.org/CC/wiki-to-print
16. https://cc.vvvvvvaria.org
Camille Crichlow

Scaling Up, Scaling Down: Racialism in the Age of Big Data
Abstract
This article explores the shifting perceptual scales of racial epistemology and anti-blackness in predictive policing technology. Following Paul Gilroy, I argue that the historical production of racism and anti-blackness has always been deeply entwined with questions of scale and perception. Where racialisation was once bound to the anatomical scale of the body, Thao Phan and Scott Wark’s conceptualisation of “racial formations as data formations” informs insights into the ways in which “race”, or its 21st century successor, is increasingly being produced as a cultivation of post-visual, data-driven abstractions. I build upon analysis of this phenomenon in the context of predictive policing, where analytically derived “patrol zones” produce virtual barriers that divide civilian from suspect. Beyond a “garbage in, garbage out” critique, I explore the ways in which predictive policing instils racialisation as an epiphenomenon of data-generated proxies. By way of conclusion, I analyse American Artist’s 21-minute video installation 2015 (2019), which depicts the point of view of a police patrol car equipped with a predictive policing device, to parse the scales upon which algorithmic regimes of racial domination are produced and resisted.
Introduction
2015, a 21-minute video installation shown at American Artist’s 2019 multimedia solo exhibition My Blue Window at the Queens Museum in New York City, assumes the point of view of a dashboard surveillance camera positioned on the hood of a police car cruising through Brooklyn’s side streets and motorways. Superimposed on the vehicle’s front windshield, a continuous flow of statistical data compares crime frequencies in 2015 with those of the preceding year: “Murder, 2015: 5, 2014: 7. Percent change: -28.6%”. Below a shifting animation of neon pink clouds, the word “forecasting” appears as the sun rises on the freeway. The vehicle suddenly changes course, veering towards an exit guided by a series of blinking ‘hot spots’ identified on the screen’s navigation grid. Over the deafening din of a police siren, the car races towards its analytically derived patrol zone. The movement of the camera slows to a stop on an abandoned street as the words “Crime Deterred” repetitively pulse across the screen. This narrative arc circuitously structures the filmic point of view of a predictive policing device.
In tandem with American Artist’s broader multimedia oeuvre, 2015 operates at historical intersections of race, technology, and knowledge production. Their legal name change to American Artist in 2013 suggests a purposeful play with ambivalence, one that foregrounds the visibility and erasure of black art practice, asserting blackness as descriptive of an American artist, while simultaneously signalling anonymity to evade the surveillant logics of virtual spaces. Across their multimedia works, forms of cultural critique stage the relation between blackness and power while addressing histories of network culture. Foregrounding analytic means through which data-processing and algorithms augment and amplify racial violence against black people in predictive policing technology, American Artist’s 2015 interweaves fictional narrative and coded documentary-like footage to construct a unique experimental means to invite rumination on racialised spaces and bodies and their assigned “truths” in our surveillance culture.
As large-scale automated data processing entrenches racial inequalities through processes indiscernible to the human eye, 2015 plays with scale as a response. Following Joshua DiCaglio, I invoke scale here as a mechanism of observation that establishes “a reference point for domains of experience and interaction” (3). Relatedly, scale structures the relationship between the body and its abstract signifiers, between identity and its lived outcomes. As sociologist and cultural studies scholar Paul Gilroy observes, race has always been a technology of scale: a tool to define the minute, minuscule, microscopic signifiers of the human against an imagined nonhuman ‘other’. In the 21st century, however, racialisation finds novel lines of emergence in evolving technological formats less constrained by the perceptual and scalar codes of a former racial era. No doubt, residual patterns of racialisation at the scale of the individual body remain entrenched in everyday experience. Here, however, I adopt a different orientation, one that specifically examines the less considered role of data-driven technologies that increasingly inscribe racialisation as a large-scale function of datafication.
Predictive policing technology relies on the accumulation of data to construct zones of suspicion through which racialised bodies are disproportionately rendered hyper-visible and subject to violence (Brayne; Chun). Indeed, predictive analytics range across a wide spectrum of sociality. Health care algorithms employed to predict and rank patient care favour white patients over black (Obermeyer), and automated welfare eligibility calculations keep the racialised poor from accessing state-funded resources (Rao; Toos), for example. Relatedly, credit-market algorithms widen already heightened racial inequalities in home ownership (Bhutta et al.). While racial categories are not explicitly coded within the classificatory techniques of analytic technologies, large-scale automated data processing often produces racialising outputs that, at first glance, appear neutral.
Informed by the “creeping” role of prediction and subsequent “zones of suspicion,” I consider how racial epistemology is actively reconstructed and reified within the scalar magnitude of “big data”. This article will focus on racialisation as it is bound up in the historical production of blackness in the American context, though I will touch on the ways in which big data is reframing the categories upon which former racial classifications rest more broadly. Following Paul Gilroy’s historical periodisation of racism as a scalar project of the body that moves simultaneously inwards and downwards towards molecular scales of corporeal visibility, I ask how “big data” now exerts upwards and outwards pressures into a globalised regime of datafication, particularly in the context of predictive policing technology. Drawing from Thao Phan and Scott Wark’s conception of racial formations as data formations, that is, “modes of classification that operate through proxies and abstractions and that figure racialized bodies not as single, coherent subjects, but as shifting clusters of data” (1), I explore the stakes and possibilities for dismantling racialism when the body is no longer its singular referent. To do this, I build upon analysis of this phenomenon in the context of predictive policing, where analytically derived “patrol zones” produce virtual barriers that map new categories of human difference through statistical inferences of risk. I conclude by returning to analysis of American Artist’s 2015 as an example of emergent artistic intervention that reframes the scales upon which algorithmic regimes of domination are being produced and resisted.
The scales of Euclidean anatomy
The story of racism, as Paul Gilroy tells it, moves simultaneously inwards and downwards into the contours of the human body. The onset of modernity – defined by early colonial contact with indigenous peoples and the expansion of European empires, the trans-Atlantic slave trade, and the emergence of enlightenment thought – saw the evolution of a thread of natural scientific thinking centred on taxonomical hierarchies of human anatomy. 18th century naturalist Carl Linnaeus’s major classificatory work, Systema Naturae (1735), is widely recognised as the most influential taxonomic method that shaped and informed racist differentiations well into the nineteenth century and beyond. Linnaeus’s framework did not yet mark a turn towards biological hierarchisation of racial types. Nevertheless, it inaugurated a new epoch of race science that would collapse and order human variation into several fixed and rigid phenotypic prototypes. By the onset of the 19th century, the racialised body took on new meaning as the terminology of race slid from a polysemous semantic to a narrower signification of hereditary, biological determinism. In this shift from natural history to the biological sciences, Gilroy notes a change in the “modes and meanings of the visual and the visible”, and thus, the emergence of a new kind of racial scale: what he terms the scale of comparative or Euclidean anatomy (844). This shift in scalar perceptuality is defined by “distinctive ways of looking, enumerating, measuring, dissecting, and evaluating – a trend that could only move further inwards and downwards under the surface of the skin” (844). By the middle of the 19th century, for example, the sciences of physiognomy, phrenology, and comparative anatomy had encoded racial hierarchies within the physiological semiotics of skulls, limbs, and bones. By the early 20th century, the eugenics movement pushed the science of racial discourse to ever smaller scopic regimes. Even the microscopic secrets of blood became subject to racial scrutiny through the language of genetics and heredity.
Now twenty years into the 21st century, our perceptual regime has been fundamentally altered by exponential advancements in digital technology. Developments across computational, biological, and analytic sciences produce new forms of perceptual scale, and with them, as Gilroy suggests, open consideration for envisioning the end of race as we know it. Writing in the late 1990s, Gilroy observed how technical advancements in imaging technologies, such as the nuclear magnetic resonance spectroscope [NMR/MRI] and positron emission tomography [PET], “have remade the relationship between the seeable and the unseen” (846). By imaging the body in new ways, Gilroy proposes, emergent technologies that allow the body to be viewed on increasingly minute scales “impact upon the ways that embodied humanity is imagined and upon the status of bio-racial differences that vanish at these levels of resolution” (846). This scalar movement ever inwards and downwards became especially evident in the advancements of molecular biology. Between 1990 and 2003, the Human Genome Project mapped the first human genome using new gene sequencing technology. The project concluded that there is no scientific evidence that supports the idea that racial difference is encoded in our genetic material. Once and for all, or so we thought, biological conceptions of race were disproved as a scientifically valid construct. In this scalar movement beyond Euclidean anatomy, as Gilroy discerns, the body ceases to delimit “the scale upon which assessments of the unity and variation of the species are to be made” (845). In other words, we have departed from the perceptual regime that once overdetermined who could be deemed ‘human’ at the scale of the body.
Rehearsing this argument is not meant to suggest that racism has been eclipsed by innovations in technology, or that racial classifications do not remain persistently visible. Gilroy (“Race and Racism in ‘The Age of Obama’”), along with his critics, makes clear that the “normative potency” of biological racism retains rhetorical and semiotic force within contemporary culture. Efforts to resuscitate research into race’s biological basis continue to appear in scientific fields (Saini), while the ongoing deaths of black people at the hands of police, or the increase in violent assaults against East Asian people during the coronavirus pandemic, demonstrate how racism is obstinately fixed within our visual regime. Gilroy suggests, however, that while the perceptual scales of race difference remain entrenched, these expressions of racialism are inherently insecure and can be made to yield, politically and culturally, to alternative visions of non-racialism. To combat the emergent racism of the present, this vision suggests, we must look beyond the perceptual-anatomical scales of race difference that defined the modern episteme. Having “let the old visual signifiers of race go”, Gilroy directs attention to the task of doing “a better job of countering the racisms, the injustices, that they brought into being if we make a more consistent effort to de-nature and de-ontologize ‘race’ and thereby to disaggregate raciologies” (839).
Attending to these tasks of intervention requires that we keep in mind the myriad ways in which the residual traces left by older racial regimes subtly insinuate themselves into the functions of newly emergent “post-visual” technologies. As Alexandra Hall and Louise Amoore observe, the nineteenth century ambitions to dissect the body, and thus lay bare its hidden truths, also “reveal a set of violences, tensions, and racial categorizations which may be reconfigured within new technological interventions and epistemological frameworks” (451). Referencing contemporary virtual imaging devices which scan and render the body visible in the theatre of airport security, Hall and Amoore suggest that new ways of visualizing, securitizing, and mapping the body draw upon the age-old racial fantasy of rendering identity fully transparent and knowable through corporeal dissection. While the anatomical scales of racial discourse have not been wholly untethered from the body, the ways in which race, or its 21st century successor, is being rendered in new perceptual formats remain an urgent question.
‘Racial formations as data formations’
Beyond anatomical scales of race discourse, there is a sense that race is being remade not within extant contours of the body’s visibility, but outside corporeal recognition altogether. If the inward direction towards the hidden racial truths of the human body defined the logics and aesthetics of our former racial regime, how might we think about the 21st century avalanche of data and analytic technologies that increasingly determine life chances in an interconnected, yet deeply inequitable world? Can it be said that our current racial regime has reversed racialism’s inward march, now exerting upwards and outwards pressures into a globalised regime of “big data”?
Big data, broadly understood, refers to data that is large in volume, high in velocity, and provided in a variety of formats from which patterns can be identified and extracted (Laney). “Big”, of course, evokes a sense of scalar magnitude. For data scholar Wolfgang Pietsch, “a data set is ‘big’ if it is large enough to allow for reliable predictions based on inductive methods in a domain comprising complex phenomena”. Thus, data can be considered ‘big’ in so far as it can generate predictive insights that inform knowledge and decision-making. Growing ever more prevalent across major industries such as medical practice (Rothstein), warfare (Berman), criminal justice (Završnik) and politics (Macnish and Galliott), data acquisition and analytics increasingly form the bedrock not only of the global economy, but of domains of human experience.
Big data technologies are often claimed to be more truthful, efficient, and objective compared to the biased and error-prone tendencies of human decision-making. Their critics, however, have shown this assumption to be outright false – particularly for people of colour. Safiya Noble’s Algorithms of Oppression highlights cases of algorithmically driven data failures which underscore the ways in which sexism and racism are fundamental to online corporate platforms like Google. Cathy O’Neil’s Weapons of Math Destruction addresses the myriad ways in which big data analytics tend to disadvantage the poor and people of colour under the auspices of objectivity. Such critiques often approach big data through the lens of bias – either bias embedded in the views of the dataset or algorithm creator, or bias ingrained in the data itself. In other words, biased data will subsequently produce biased outcomes – garbage in, garbage out. Demands for inclusion or “unbiased data”, however, often fail to address the racialised dialectic between inside and outside, human and Other. As Ramon Amaro argues, “to merely include a representational object in a computational milieu that has already positioned the white object as the prototypical characteristic catalyses disruption superficially” (53). From this perspective, the racial other is positioned in opposition to the prototypical classification, which is whiteness, and is thus seen as “alienated, fragmented, and lacking in comparison” (Amaro 53). If the end goal is inclusion, Amaro follows, what about a right of refusal to representation? This question is particularly pertinent in a context where inclusion also means exposure to heightened forms of surveillance for racialised communities, particularly in the context of policing (Lee and Chin).
Relatedly, the language of bias, inclusion, and exclusion does not account for the ways in which big data analytics are producing new racial classifications emerging not from data inputs, but within correlative models themselves. As Thao Phan and Scott Wark suggest, “the application of inductive techniques to large data sets produces novel classifications. These classifications conceive us in new ways – ways that we ourselves are unable to see” (3). Following Gilroy’s idea that changes in perceptuality led by the technological revolution of the 21st century require a reimagination of race, or a repudiation of it altogether, Phan and Wark claim that racialism is no longer solely predicated on visual hierarchies of the body, but rather “emerges as an epiphenomenon of automated algorithmic processes of classifying and sorting operating through proxies and abstractions” (2). This phenomenon is what they term racial formations as data formations: that is, racialisation shaped by the non-visible processing of data-generated proxies. Drawing from examples such as Facebook’s now disabled “ethnic affinity” function, which classed users by race simply by analysing their behavioural data and proxy indicators, such as language, ‘likes’, and IP address – Phan and Wark show “that in the absence of explicit racial categories, computational systems are still able to racialize us” – though this may or may not map onto what one looks like (3). While the datafication of racial formations may deepen already-present inequalities for people of colour, these formations have a much more pernicious function: the transformation of racial category itself.
Can these emergent formations culled from the insights of big data be called ‘race’, or do we need a new kind of language to account for technologically induced shifts in racial perception and scale? Further, are processes of computational induction ‘racialising’ if they are producing novel classifications which often map onto, but are not constrained by, previous racial categories? As Achille Mbembe notes, these questions must also be considered in the context of 21st century globalisation and the encroachment of neoliberal logics into all facets of life, such that “all events and situations in the world of life can be assigned a market value” (Vogl 152). Our contemporary context of globalised inequality is increasingly predicated on what Mbembe describes as the “universalisation of the black condition”, whereby the racial logics of capture and predication which have shaped the lives of black people from the onset of the transatlantic slave trade “have now become the norm for, or at least the lot of, all of subaltern humanity” (Mbembe 4). Here, it is not the biological construct of race per se that is activated in the classifying logics of capitalism and emergent technologies, but rather the production of “categories that render disposable populations disposable to violence” (Lloyd 2). In other words, 21st century racialism is circumscribed by differential relations of human value determined by the global capitalist order. Nonetheless, these new classifications retain the pervasive logic of difference and division, reconfiguring the category of the disentitled, less-than-human Other in new formations. As Mbembe suggests, neither “Blackness nor race has ever been fixed”, but rather each reconstitutes itself in new ways (6). In the next section, I turn to predictive policing technology to parse the ways in which data regimes are mapping new terrains upon which racial formations are produced and sustained.
The Problem of Prediction: Data-led policing in the U.S.
Multiple vectors of racialism, both old and new, visual and post-visual, large and small-scale, play out in the optics of predictive policing technology. Predictive policing software operates by analysing vast swaths of criminological data to forecast when and where a crime of a certain nature will take place, or who will commit it. The history of data collection is deeply entwined with the project of policing and criminological knowledge, and further, the production of race itself. As Autumn Womack shows in her analysis of “the racial data revolution” in late nineteenth century America, “data and black life were co-constituted in the service of producing a racial regime” (15). Statistical attempts to measure and track the movements of black populations during this period went hand in hand with sociological and carceral efforts to regulate and control black life as an object of knowledge. Policing was and continues to be central to this disciplinary project. As R. Joshua Scannell powerfully argues, “Policing does not have a ‘racist history.’ Policing makes race and is inextricable from it. Algorithms cannot ‘code out’ race from American policing because race is a policing technology, just as policing is a bedrock racializing technology” (108). Like data, policing and the production of race difference co-constitute one another. Predictive policing thus cannot be analysed without accounting for entanglements between data, carcerality, and racialism.
Computational methods were integrated into American criminal justice departments beginning in the 1960s. Incited by America’s “War on Crime”, the densification of urban areas following the Great Migration of African Americans to Northern cities, and the economic fall-out from de-industrialisation, criminologists began using data analytics to identify areas of high-crime incidence from which police patrol zones were constructed. This strategy became known as hot spot criminology. By 1994, the New York City Police Department (NYPD) had integrated CompStat, the first digitised, fully automated data-driven performance measurement system, into its everyday operations. CompStat is now employed by nearly every major urban police department in America. Beginning in 2002, the NYPD began using statistical insights digitally generated by CompStat to draw up criminogenic “impact zones” – namely low income, black neighbourhoods – that would be subject to heightened police surveillance. As Brian Jefferson observes, the NYPD’s statistical strategy “was deeply wound up in dividing urban space according to varying levels of policeability” (116). Moreover, impact zones “provided not only a scientific pretext for inundating negatively racialized communities in patrol units but also a rationale for micromanaging them through hyperactive tactics” such as stop-and-frisk searches (Jefferson 117). Between 2005 and 2006, the NYPD conducted 510,000 stops in impact zones – a 500% increase from the year before. The analytically derived “impact zone” can thus be understood as a bordering technology – one that sorts and divides civilian populations from those marked by higher probabilities of risk, and thus suspicion.
Policing has only grown more reliant on insights culled from predictive data models. PredPol – a predictive policing company that was developed out of the Los Angeles Police Department in 2009 – forecasts crimes based on crime history, location, and time of day. HunchLab, the more “holistic” successor of PredPol, not only considers factors like crime history, but uses machine learning approaches to assign criminogenic weights to data “associated with a variety of crime forecasting models” such as the density of “take-out restaurants, schools, bus stops, bars, zoning regulations, temperature, weather, holidays, and more” (Scannell 117). Here, it is not the omniscience of panoptic vision, or the individualising enactment of power that characterises HunchLab’s surveillance software, but the punitive accumulation of proxies and abstractions in which “humans as such are incidental to the model and its effects” (Scannell 118). Under these conditions, for example, “criminality increasingly becomes a direct consequence of anthropogenic climate change and ecological crisis” (Scannell 122).
Data-driven policing is often presented as the objective antidote to the failures of human-led policing. However, in a context where black and brown people around the world are historically and contemporaneously subjected to disproportionate police surveillance, carceral punishment, and state-sponsored violence, input data analysed by predictive algorithms often perpetuates a self-reinforcing cycle through which black communities are circuitously subjected to heightened police presence. As sociologist Sarah Brayne explains, “if historical crime data are used as inputs in a location-based predictive policing algorithm, the algorithm will identify areas with historically higher crime rates as high risk for future crime, officers will be deployed to those areas, and will thus be more likely to detect crimes in those areas, creating a ‘self-fulfilling statistical prophecy’” (109).
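The circularity Brayne describes can be illustrated with a deliberately crude toy simulation (a sketch with invented numbers, not a model of any actual policing system): when deployment follows recorded history and detection follows deployment, zones that start with more records accumulate ever more of them, even if the underlying incident rates are identical everywhere.

```typescript
// Toy illustration of the "self-fulfilling statistical prophecy" in
// location-based predictive policing. All numbers are invented.
const zones = ["A", "B", "C"];
let recorded = [30, 10, 10];          // historical recorded incidents per zone
const trueRate = [0.1, 0.1, 0.1];     // identical underlying incident rate everywhere

for (let round = 0; round < 20; round++) {
  const total = recorded.reduce((a, b) => a + b, 0);
  // Deploy patrols in proportion to recorded history (the "prediction").
  const patrols = recorded.map((r) => (100 * r) / total);
  // Detection depends on both the underlying rate and patrol presence,
  // so zones with more patrols generate more new records.
  const detected = patrols.map((p, i) => p * trueRate[i]);
  recorded = recorded.map((r, i) => r + detected[i]);
}

// Zone A ends up with far more recorded incidents than B or C,
// even though the underlying rates were identical.
console.log(zones.map((z, i) => `${z}: ${recorded[i].toFixed(1)}`));
```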
Beyond this critical cycle of ‘garbage in, garbage out,’ Phan and Wark’s conceptualisation of racial formations as data formations provides insight into the ways in which predictive policing instils racialisation as a post-visual epiphenomenon of data-generated proxies. While the racist outcomes of data-led policing certainly manifest in the lived realities of poor and negatively racialised communities, predictive policing necessarily relies upon data-generated, non-visual proxies of race – postcode, history of contact with the police, geographic tags, distribution of schools or restaurants, weather, and more. Such technologies demonstrate how different valuations of risk that “render disposable populations disposable to violence” are actively produced not merely through historical data, but in the correlative models themselves (Lloyd 2). While these statistically generated “patrol zones” tend to map onto historically racialised communities, this process of racialisation does not necessarily correspond to the visual or phenotypic signifiers of race. What emerges in these correlative models are novel kinds of classifications that arise from probabilistic inferences of suspicion through which subjects – often racial minorities – are exposed to heightened surveillance and violence. As Jefferson suggests, “modernity’s racial taxonomies are not vanishing through computerization; they have just been imported into data arrays” (6). The question remains: as neighbourhoods and ecologies, and those who dwell within them, are actively transcribed into newly ‘raced’ data formations, what becomes of the body in this post-visual shift?
2015
This provocation returns us to American Artist’s video installation, 2015. From the outset of the work, the camera’s objectivity is consistently brought into question. Gesturing towards the frame as an architectural narrowing of positionality, the constricted, stationary viewpoint of the camera fixed onto the dashboard of the police car positions the viewer within the uncomfortable observatory of the surveillant police apparatus. The window is imaged as an enclosure which frames the disproportionate surveillance of black communities by police. The world view here is captured from a single axis, a singular ideological vantage point, as an already known world of city landscape passes ominously through the window’s frame of vision. The frame’s hyper-selectivity, an enduring object of scrutiny in the field of evidentiary image-making, and visuality more broadly, is always implicated in the politics of what exists beyond its view, thus interrogating the assumed indexicality, or visual truth, of the moving image.
The frame’s ambiguous functionality is made palpable when the car pulls over to stop. Over the din of a loud police siren, we hear a car door open and shut as the disembodied police officer climbs out of the car to survey the scene. Never entering the camera’s line of vision, the imagined, diegetic space outside the frame draws attention to the occlusive nature of the recorded seen-and-heard. As demonstrated in the countless acquittals of police officers guilty of assaulting or killing unarmed black people, even when death occurs within the “frame” of a surveillance camera, dash cam or a civilian bystander, this visual record remains ambiguous and is rarely deemed conclusive. Consider the cases of Eric Garner, Philando Castile, or Rodney King, a black man whose violent assault by a group of LAPD officers in 1991 was recorded by a bystander and later used as evidence in the prosecution of King’s attackers. Despite the clear visual evidence of what took place, it was the Barthesian concept of the “caption” – the contextual text which rationalises or situates an image within a given ontological framework – that led to the officers’ acquittal. As Brian Winston notes, “what was beyond the frame containing “the recorded ‘seen-and-heard’” was (or could be made to seem) crucial. This is always inevitably the case because the frame is, exactly, a “frame” – it is blinkered, excluding, partial, limited” (614). This interrogation of the fallacies of visual “evidence” is a critical armature of 2015’s intervention, one that interrogates the underlying assumptions of visuality and perception in surveillance apparatuses, constructing the frame of the police window not as a source of visible evidence, but as that which obfuscates, conceals, or obstructs.
Beyond the visual, other lives of data further complicate the already troubled notion of the visible as a stable category. As Sharon Lin Tay argues, “Questions of looking, tracking, and spying are now secondary to, and should be seen within the context of, network culture and its enabling of new surveillance forms within a technology of control” (568). In other words, scopic regimes that implicitly inform the surveillance context are increasingly subsumed by the datasphere from which multiple stories and scenes may be spun. “Evidence” no longer relies solely on a visual account of “truth”, but rather on a digital record of traces. American Artist’s representation of predictive policing software and technologies of biometric identification alludes to the extent to which data is literally superimposed onto our own frame of vision. Predicting our locations, consumption habits, political views, credit scores, and criminal intentions, analytic predictive technologies condense potential futures into singular outputs. As the police car follows the erratic route of its predictive policing software on the open road, we are simultaneously made aware of a future which is already foreclosed.
Here, as 2015 so aptly suggests, the life of data exists beyond our frame of view but increasingly determines what occurs within it. Data is the text that literally “captions” our lives and identities. In zones deemed high-risk for crime by analytic algorithms, subjects are no longer considered civilians, but are hailed and interpellated as criminalised suspects through their digital subjectification. As the police car cruises through Brooklyn’s sparsely populated streets and neighbourhoods in the early morning, footage of people going about their daily business morphs into an insidious interrogation of space and mobility. As the work provocatively suggests, predictive policing constructs zones of suspicion and non-humanity through which the body is interrogated and brought into question. In identifying the body as “threat” by virtue of its geo-spatial location in a zone wholly constructed by the racializing history of policing data, the racial body is recoded, not as a necessarily phenotypic entity, but as a product of data. American Artist’s 2015 palpably conveys race as lived through data, shaping who, and what, comes into the frame of the surveillant apparatus. The unadorned message: race is produced and sustained as a product of data.
Yet, at the same time, the work’s aesthetic intervention interrogates the enduring physiological nature of visual racialism through the coding of the body. As the police car cruises through the highlighted zones of predicted crime, select passers-by are singled out and scanned by a facial recognition device. This visceral reference to biometric identification – reading the body as the site and sign of identity – complicates the claim that the primordial, objectifying force of visual evidence is transcended by neutral-seeming post-visual data apparatuses. Biometric systems of measurement, such as facial templates or fingerprint identification, are inherently tied to older, eighteenth and nineteenth century colonial and ethnographic regimes of physiological classification that aimed to expose a certain truth about the racialized subject through their visual capture. Contemporary biometric technologies, as Simone Browne argues, retain the same systemic logics of their colonial predecessors, “alienating the subject by producing a truth about the racial body and one’s identity (or identities) despite the subject’s claims” (110). It has been repeatedly shown, for example, that facial recognition software demonstrates bias against subjects belonging to specific racial groups, often failing to detect or misclassifying darker-skinned subjects, an event that the biometric industry terms a “failure to enrol” (FTE). Here, blackness is imaged as outside the scope of human recognition, while at the same time, black people are disproportionately subjected to heightened surveillance by global security apparatuses. This disparity shows that while forms of racialisation are increasingly migrating to the terrain of the digital, race still inheres, even if residually, as an epidermal materialisation in the biometric evidencing of the body.
In American Artist’s 2015, the extant tension between data and the lived, phenotypic, or embodied constitution of racialism suggests that these two racializing formats interlink and reinforce each other. By evidencing the racial body, on the one hand as a product of data, and on the other as an embodied, physiological construction of cultural and scientific ontologies of the Other, American Artist makes visible the contemporary and historical means through which race is lived and produced. By calling into question the visual and digital ways the racial body is made to evidence its being-in-the-world, Artist challenges and disrupts the evidentiary logics of surveillance apparatuses – that is, what Catherine Zimmer describes as the “production of knowledge through visibility” (428). By entangling racializing forms of surveillance within a realist documentary-like coded format, American Artist calls into question what it means to document, record, or survey within the frame of moving images. As data increasingly guides where we go, what we see, and whose bodies come into question, claims on the recorded seen and heard, as well as the digitally networked, must continually be interrogated. In the context of our current democratic crisis, where the volatile distinctions between “fact” and “fiction” have produced a plethora of unstable meanings, American Artist’s 2015 is a prime example of emergent activist political intervention that interrogates the underlying assumption of documentary objectivity in both cinematic and data-driven formats, subverting the racial logics that remain imbricated within visual and post-visual systems of classification.
Conclusion
This article explores the shifting terrain of racial discourse in the age and scalar magnitude of big data. Drawing from Paul Gilroy’s periodisation of racialism from the Euclidean anatomy of the 19th century to the genomic revolution of the 1990s, I show that race has always been deeply entwined with questions of scale and perception. Gilroy observed that emergent digital technologies afford potentially new ways of seeing the body, and subsequently, conceiving humanity in novel scales detached from the visual. Similar insights inform Phan and Wark’s prescient account of racial formations as data formations – the idea that race is increasingly being produced as a cultivation of data-driven proxies and abstractions. In the context of ongoing logics of contemporary race, American Artist’s 2015 returns consideration to the ways in which residual, and emergent, characteristics of racialism are embedded in everyday systems of predictive policing technology. Through multimedia intervention, Artist’s video work conveys racialism not as a single, static entity, but as a historical structure that mutates and evolves algorithmically across an ever-shifting geopolitical landscape of capital and power. In this instance, American Artist orchestrates one critical means to grasp racialism’s multiple forms, past and present, visual, and otherwise, towards future modalities and determinations not yet realised.
Works cited
Amaro, Ramon. The Black Technical Object: On Machine Learning and the Aspiration of Black Being. Sternberg Press, 2023.
Amoore, Louise. “Biometric Borders: Governing Mobilities in the War on Terror.” Political Geography, vol. 25, no. 3, 2006, pp. 336–351.
Berman, Eli et al. Small Wars, Big Data: The Information Revolution in Modern Conflict. Princeton University Press, 2018.
Bhutta, Neil, Aurel Hizmo, and Daniel Ringo. “How Much Does Racial Bias Affect Mortgage Lending? Evidence from Human and Algorithmic Credit Decisions,” Finance and Economics Discussion Series 2022-067. Washington: Board of Governors of the Federal Reserve System, 2022.
Brayne, Sarah. Predict and Surveil: Data, Discretion, and the Future of Policing. Oxford University Press, 2020.
Browne, Simone. Dark Matters: On the Surveillance of Blackness. Duke University Press, 2015.
Chun, Wendy Hui Kyong. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. The MIT Press, 2021.
DiCaglio, Joshua. Scale Theory: A Nondisciplinary Inquiry. University of Minnesota Press, 2021.
Laney, Doug. “3D Data Management: Controlling Data Volume, Velocity, and Variety.” Gartner, File No. 949, 6 February 2001, http://blogs.gartner.com/doug-laney/files/2012/01/ad949-3D-Data-Management-Controlling-Data-Volume-Velocity-and-Variety.pdf.
Gilroy, Paul. “Race Ends Here.” Ethnic and Racial Studies, vol. 21, no. 5, 1998, pp. 838–847.
Gilroy, Paul. “Race and Racism in the Age of Obama.” The Tenth Annual Eccles Centre for American Studies Plenary Lecture given at the British Association for American Studies Annual Conference, 2013.
Jefferson, Brian. Digitize and Punish: Racial Criminalization in the Digital Age. University of Minnesota Press, 2020.
Lloyd, David. Under Representation: The Racial Regime of Aesthetics. New York: Fordham University Press, 2018.
Macnish, Kevin, and Jai Galliott, editors. Big Data and Democracy: Edinburgh University Press, 2020.
Mbembe, Achille. Critique of Black Reason. Duke University Press, 2013.
Melamed, Jodi. Represent and Destroy: Rationalizing Violence in the New Racial Capitalism. University of Minnesota Press, 2011.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.
Obermeyer, Ziad, et al. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science, 2019, pp. 447–453.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Allen Lane, 2016.
Rothstein, M. “Big Data, Surveillance Capitalism, and Precision Medicine: Challenges for Privacy.” Journal of Law, Medicine & Ethics, vol. 49, no. 4, 2021, pp. 666–676.
Saini, Angela. Superior: the Return of Race Science. Beacon Press, 2020.
Scannell, R. Joshua. “This Is Not Minority Report: Predictive Policing and Population Racism.” Viral Justice: How We Grow the World We Want, edited by Ruha Benjamin. Princeton University Press, 2022, pp. 106–129.
Phan, Thao, and Scott Wark. “Racial Formations as Data Formations.” Big Data & Society, vol. 8, no. 2, 2021, pp. 1–5.
Vogl, Joseph. Le spectre du capital. Diaphanes, 2013.
Winston, Brian. “Surveillance in the Service of Narrative”. A Companion to Contemporary Documentary Film, edited by Alexandra Juhasz and Alisa Lebow. John Wiley & Sons, 2015, pp. 611-628.
Womack, Autumn. The Matter of Black Living: the Aesthetic Experiment of Racial Data, 1880-1930. The University of Chicago Press, 2022.
Završnik, Aleš. “Algorithmic Justice: Algorithms and Big Data in Criminal Justice Settings.” European Journal of Criminology, vol. 18, no. 5, 2021, pp. 623–642.
Jack Wilson
Minor Tech and Counter-revolution:
Tactics, Infrastructures, QAnon
Minor Tech and Counter-revolution: Tactics, Infrastructures, QAnon
Abstract
Following repeated assertions by QAnon promoters that to understand the phenomenon one must ‘do your own research’, this article seeks to unpack how ‘research’ is understood within QAnon, and how this understanding is operationalised in the production of particular tools. Drawing on exemplar literature internal to the phenomenon, it examines discourses on the question of QAnon’s epistemology, with particular reference to the stated purpose of ‘research’ and its difference to an allegedly hegemonic (or ‘mainstream’) episteme. The article then turns to how these discourses are operationalised in the research tools QAnon.pub and QAgg.news (‘QAgg’). Finally, it concludes by way of a reflection on how QAnon’s aggressively counter-revolutionary strategies and infrastructures can trouble the concept of the ‘minor’ in minor tech.
Introduction
For the most part, the contributions to this issue have discussed instances of ‘minor tech’ that offer creative and necessary interventions in tactics and infrastructures that are–in their deployment by big tech–exploitative, exclusionary, and often environmentally catastrophic. As such, the impression of minor tech may well be that its ‘small’ or ‘human’ scale necessarily precludes such tactics and technologies’ use in the service of a reactionary political project. Nevertheless, this article argues that QAnon can be understood as an assemblage of ‘minor techs’: small-scale contrarian practices and infrastructures whose very granularity produces the conditions for the aggregation known as ‘QAnon’ to occur and mutate from the cryptic missives of one ‘anon’ among many on 4chan’s /pol/ board in late 2017 to–in 2023–a global phenomenon with ominous implications for the question of post-truth’s effects on contemporary cultural and political life (see Rothschild; Sommer, Trust the Plan).
Andersen and Cox open this issue quoting Deleuze and Guattari’s definition of ‘minor literature’ as characterised by “the deterritorialization of language, the connection of the individual to a political immediacy, and the collective arrangement of utterance” (18). They go on to suggest that minor tech’s politics of scale potentially offer an analogous operation with regard to the production of autonomous – potentially revolutionary – spaces for marginalised groups. It is in this sense that this article’s contention regarding QAnon’s being a minor tech arises. Specifically, it is in the injunction to ‘do your own research.’ Among QAnon’s myriad factions the statement is a veritable refrain that characterises involvement in the phenomenon as more than simply believing its conspiratorial worldview, but rather participating in its production by investigating its veracity for oneself and, by implication, arriving at similar conclusions. While there has been some scholarly research into various aspects of QAnon’s participatory culture (de Zeeuw and Gekker; Kir et al.; Marwick and Partin; See), how this is conceptualised and enabled within the phenomenon through minor tech tactics and infrastructures remains comparatively understudied.
This article, accordingly, seeks to unpack how ‘research’ is understood within QAnon, and how this understanding is operationalised in the production of particular tools. Drawing on exemplar literature internal to the phenomenon, it will first examine discourses surrounding the question of QAnon’s inverted epistemology with particular reference to the stated purpose of ‘research’ and its perceived difference to an allegedly hegemonic (or ‘mainstream’) episteme. Following this analysis of QAnon’s internal discourses on the matter of ‘research,’ the discussion will then turn to how these discourses are reflected and enacted in the ‘Q Drop’ aggregators QAnon.pub and QAgg.news (‘QAgg’). Q Drops are QAnon subjects’ term for the ambiguous dispatches made by the eponymous, mysterious figure known as ‘Q’ which form the ur-text of the phenomenon. While there is a certain consistency to the Q Drops insofar as they are concerned with the actions of Donald Trump and his allies against the nefarious ‘Deep State’ or ‘Cabal’ who are alleged to have undermined the former party’s efforts to ‘Make America Great Again,’ they are also characterised by an extreme degree of vagueness which demands epistemic work on the part of the QAnon subject.
Since these materials have been posted exclusively to anarchic and unarchived image boards – first 4chan, then 8kun (formerly 8chan) – Q Drop aggregators scrape, archive, and afford users the means to do ‘research’ with the Q Drops. A notable feature of the Q Drop aggregators is their increasing complexity over time: where QAnon.pub (established March 2018) is effectively wholly concerned with enabling the analysis of the content of Q Drops, QAgg (April 2019) mines Drops for actual and esoteric metadata, supposedly encrypted additional information that pushes the Q Drops’ semiosis to the point of potential exhaustion. The increasing granularity of how Q Drops are interpreted and applied in the ‘research’ afforded by QAgg specifically reflects a tendency towards the molecular intensification of QAnon subjects and speaks to a broader argument regarding precisely the ‘minor’ quality of QAnon’s technical apparatuses that makes its reactionary manifestation at scale possible.
‘Research’ at a human scale
Despite the centrality of Q to the worldview of QAnon, they do not present themselves as, nor are they taken to be, a prophet bearing a revealed truth. Instead, Q characterises themselves as instructing their followers in what might be understood as a degraded form of ideology critique wherein the asserted reality of the phenomenon’s worldview is rendered visible in the mediatic traces of the world:
You are being presented with the gift of vision.
Ability to see [clearly] what they've hid from you for so long [illumination].
Their deception [dark actions] on full display.
People are waking up in mass.
People are no longer blind. (Q Drop 4550, square brackets in original)[1]
‘Research’ in QAnon is typically characterised by the mapping of contemporary events to the content or metadata of Q Drops by QAnon subjects, with Q occasionally intervening to correct or confirm QAnon subjects’ inferences and findings. Beyond the initial series of Drops where Q claimed that the arrest of Hillary Clinton was imminent – “between 7:45 AM - 8:30 AM EST on Monday - the morning on Oct 30, 2017” (1) – they very rarely make explicit claims as to the future. Instead, Q tends to vaguely intone on contemporary events or ‘correct’/‘verify’ the findings of QAnon subjects. Here, the failed prediction that was the basis of the very first Q Drop is illustrative. While Hillary Clinton was not arrested, the 2017-2019 Saudi Arabian purge began some days after the first Q Drop (namely, on the 4th of November 2017) with a wave of arrests across the Gulf state. In response, a user of 4chan’s /pol/ board posited that it was in fact this event that Q was alluding to (fig. 1). Per Q in their reply to said user: “Very smart, Anon. Disinformation is real. Distractions are necessary” (72). In essence, the first Q Drops were framed as being about the then-forthcoming purge, with the discussion of Hillary Clinton being misdirection to run cover for this operation.
Rather than mirroring the didactic pedagogy and unaccountable epistemic hierarchies of the so-called "mainstream media" (Pamphlet Anon and Radix 93), Q is seen as instructing QAnon subjects in a particular way of seeing and mode of inquiry. As the QAnon promoter David Hayes (a.k.a. ‘Praying Medic’) explains in a passage on this topic that is worth quoting at length:
Q uses the Socratic method. Using questions, he’ll examine our current beliefs on a given subject. He'll ask if our belief is logical, then drop hints about facts we may not have uncovered, and suggest an alternative hypothesis. He may provide a link to a news story and encourage us to do more research. The information we need is publicly available. We're free to conduct our research in whatever way we want. We're also free to interpret the information however we want. We must come to our own conclusions because Q keeps his interpretations to a minimum. For many people, researching for themselves, thinking for themselves, and trusting their own conclusions can make following Q difficult. When you’re accustomed to someone telling you what to think, thinking for yourself can be a painful adjustment. (Hayes 17)
While Q possesses a certain authority in terms of having the proverbial ‘last word’ with regard to the work of QAnon subjects, this is not exercised in most cases. It is always the QAnon subject’s obligation to ‘do your own research’ – which is, again, the mapping of contemporary events to the content, metadata, and meaning of the Q Drops. Indeed, despite Q’s ostensibly ‘final’ authority with regard to what is and is not an aspect of the phenomenon’s worldview, there are some instances where the figure has been effectively ignored due to the salience of the individual and their ‘research.’ For example, there are many QAnon subjects who believe that the deceased son of the assassinated president John F. Kennedy – John F. Kennedy Jr. – is still alive (Sommer, “QAnon, the Pro-Trump Conspiracy Theorists, Now Believe JFK Jr. Faked His Death to Become Their Leader”), and this despite Q’s explicit denial thereof (fig. 2).
QAnon’s fetishization of individual interpretation, as well as the salience of primary sources therein, has been identified by Marwick and Partin as an instance of ‘scriptural inference.’ Tripodi characterises scriptural inference as a prevailing epistemology among religious and right-wing actors in the United States wherein “those who believe in the truth of the Bible approach secular political documents (e.g., a transcript of the president’s speech or a copy of the Constitution) with the same interpretative scrutiny” (6). While Tripodi notes an analogous compulsion among their research subjects to “do their own research” (6), the extent to which epistemic authority is located in the ‘researching’ subject is unclear. In comparison, the centrality of a particular QAnon subject’s ‘research’ to themselves is an explicit refrain: even prominent QAnon promoters describe their findings with this qualification (Colley; Dylan Louis Monroe at Conscious Life Expo 2019; The Fall of the Cabal).
The overarching impression is that ‘research’ within QAnon is not so much about working towards the production of a body of knowledge that all QAnon subjects can agree upon as it is concerned with the proliferation of many personalised ‘truths.’ As with the Q Drops, so with the mediatic traces of the world–all are material available for the individual’s interpretation of one in terms of the other towards the production of increasingly personalised and complex ‘research.’ That these conditions work to produce a worldview that is characterised at the micro- and macro-levels by a swirling mess of complexity and contradictions is simply taken as evidence of the phenomenon’s good health; there is no ‘groupthink’ (Hayes).
Nevertheless, all of the phenomenon’s internal heterogeneity points to the asserted ‘truth’ of QAnon’s worldview, with contrary analysis pathologized either as the uncritical work of someone in the thrall of the ‘mainstream’ episteme or as the deliberately malicious efforts of the Deep State and its agents. Actual difference – being that which is definitionally other to a subject or particular set of conditions – is not tolerated within QAnon. What is true of the phenomenon’s epistemology is also true of its worldview and accounts for QAnon’s hostility towards minoritarian movements. To the QAnon subject, America being made ‘Great Again’ is a fantasy of fascist restoration, a perverted ‘end of history’ wherein the conditions for the different or new are permanently evacuated.
Scales of ‘Research’: the Q Drop Aggregators
While QAnon has arguably always been a cross-platform phenomenon (Zadrozny and Collins), the figure of Q themselves is closely associated with the image boards 4chan, 8chan, and 8kun. Indeed, the primary mechanism through which Q’s dispatches are considered authentic is by way of their posting exclusively to the image board they call home–presently 8kun–with their current tripcode.[2] While providing a basically adequate means for performing the apparent provenance of these ambiguous missives, this practice of “no outside comms” (465) beyond the anarchic and ephemeral image boards where Q dwells generates a certain tension with the previously discussed injunction to ‘do your own research.’ In this respect, it is necessary to explain the technical conditions within which QAnon emerged in order to understand the parallel development of archival infrastructures.
4chan was launched in 2003 by the fourteen-year-old Christopher Poole as an English-language clone of the Japanese board 2chan.net (Beran). Given the lack of server space initially available to him, Poole elected to limit the number of threads on any given board and archive nothing. Combined with the site’s default username–‘Anonymous’–and a laissez-faire moderation policy, Poole somewhat unwittingly created the conditions for the emergence of an extraordinarily dynamic and culturally significant milieu whose influence can be seen across digital culture as well as in the strategies of activist groups ranging from Occupy Wall Street, to Anonymous, to – more recently – the ‘alt-right’ and QAnon (Coleman; Phillips et al.). 8chan, meanwhile, was launched in 2013 by Fredrick Brennan and became prominent among 4chan’s more reactionary users in 2014 as a ‘free speech’-guaranteeing clone of 4chan, which at the time had banned any mention of the misogynist witch-hunt known as ‘gamergate’ (Marwick and Lewis; Sandifer). In 2019, after the respective perpetrators of the Christchurch, Poway, and El Paso massacres associated themselves with the site, it was removed from the clearnet for approximately a month before relaunching as ‘8kun’ (Hagen et al.; Keen).
While the ephemerality of 4chan was initially a means to manage limited computational resources, this quality has since come to define the culture of ‘the chans’–the global array of websites with similar affordances and user cultures that include, but are not limited to 4chan, 8chan, and 8kun (see De Keulenaar). For instance, 4chan and 8chan are both characterised by the strictly limited number of active threads (200 for 4chan, 355 for 8chan/kun) with only the most commented upon (or ‘bumped’) persisting until they too – after running out of steam or reaching the boards’ ‘bump limit’ (300 and 750 comments, respectively) – are inevitably ‘pruned’ (permanently deleted) to make way for new posts. Given the febrile rate of posting among both boards’ extensive userbases, any given thread has a strikingly short lifespan in comparison to mainstream social media platforms with a significant amount of content being pruned within a matter of minutes and the longest-lived threads persisting for only a handful of hours (Hagen, “Rendering Legible the Ephemerality of 4chan/Pol/ – OILab”).
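The thread economy described above can be made concrete. The following is a deliberately minimal toy model in Python (not drawn from 4chan’s or 8kun’s actual codebases) of how a fixed thread limit, bumping, and a bump limit interact to ‘prune’ content; the limits echo the figures cited above, while the data structures and behaviour are illustrative assumptions.

from collections import deque

THREAD_LIMIT = 200  # maximum live threads on the board (per the figures cited above)
BUMP_LIMIT = 300    # replies after which a thread no longer 'bumps'

class Board:
    def __init__(self):
        self.threads = deque()  # index 0 = top of the catalogue (most recently bumped)

    def new_thread(self, thread_id):
        self.threads.appendleft({"id": thread_id, "replies": 0})
        while len(self.threads) > THREAD_LIMIT:
            pruned = self.threads.pop()
            # pruned threads are permanently deleted; nothing is archived locally
            print("pruned thread", pruned["id"], "after", pruned["replies"], "replies")

    def reply(self, thread_id):
        for thread in self.threads:
            if thread["id"] == thread_id:
                thread["replies"] += 1
                if thread["replies"] <= BUMP_LIMIT:
                    # a reply 'bumps' the thread back to the top of the catalogue;
                    # past the bump limit it sinks until it is eventually pruned
                    self.threads.remove(thread)
                    self.threads.appendleft(thread)
                return

Run against a steady feed of new threads and replies, such a model reproduces the dynamic described here: only threads that keep attracting replies remain visible, and everything else disappears within hours.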
The prevailing view on this scalar compression of many users into an extremely limited discursive environment is that it applies a kind of Darwinian pressure on the content posted to the boards (Moot’s Final 4chan Q&A). As there are no archives, content only endures if it survives this evolutionary stress and enters the embodied memory of the userbase. Although there are user-developed mechanisms of reposting to ‘counter’ this ephemerality and allow discussions to continue over longer periods of time than might be possible otherwise – for instance, through the practice of creating and maintaining ‘general threads’ on a particular topic that are revived at the point of their reaching the boards’ bump limit – these nevertheless still primarily deal in the repetition of content by reiterating a particular line of argument, reposting a particular meme, etc., rather than archiving it (OILab). Indeed, despite the fact that there are extensive and accessible archives of these boards, these infrastructures do not really figure in the discourses of 4chan and 8chan/kun as they occur, nor, indeed, could they, given the feverish temporality of posting (Hagen, “‘Who Is /Ourguy/?”).
As a result, if one were to look for the Q Drops in-situ, they would find them spread across three websites, containing seven boards therein (chronologically: /pol/ on 4chan, then /CBTS/, /TheStorm/, /GreatAwakening/ on 8chan, and /QResearch/, /patriotsfight/ and /projectdcomms/ on 8kun) with local archives for these boards ranging from non-existent (4chan) to extremely patchy and unsearchable (8chan/kun), to say nothing of the veritable ocean of unrelated and likely obscene content that one would also encounter. Under such conditions, QAnon’s ur-text appears as a distributed and disjointed series of image board posts with unstable authorship. Q Drop aggregators intervene at this point, collecting the (currently 4,966) Q Drops into an online archive and presenting them as a coherent corpus through which ‘research’ can occur. QAnon subjects do not need to navigate the hostile interface and culture of a chan board–and few do (see “Do You Believe in Coincidences?”).[3] Instead, they can ‘research’ Q Drops at their leisure on the aggregators. Additionally, the Q Drop aggregators afford the circulation of Drops across the wider web, including the major corporate platforms, despite QAnon’s ostensible ‘deplatforming’ after its pandemic-facilitated ‘boom’ and the events of 6 January 2021 (O’Connor et al.). In essence, by enabling distributed, small-scale acts of individual ‘research’ on the part of QAnon subjects, Q Drop aggregators facilitate the production of QAnon at the immense scale that the phenomenon has achieved. The paper will now proceed with a comparative analysis of how two major Q Drop aggregators (QAnon.pub and QAgg) make their materials available for users’ ‘research’ efforts.
QAnon.pub
QAnon.pub’s domain was registered on the 7th of March 2018 (DomainTools, Whois Record for QAnon.Pub) and captured by the Internet Archive’s Wayback Machine on the 9th of the same month. As such, it is not only the oldest of the aggregators discussed in this article, but is also very likely to have been the first of the Q Drop aggregators. Indeed, its pseudonymous developer (‘qntmpkts’) is credited with aiding in the development of the open-source 8kun scraping software that is the basis of many other Q Drop aggregators (see Aliapoulios et al.; QAlerts). At the time of writing, QAnon.pub’s collection consists of 4,966 Q Drops, 110 Q Proofs, and 349 ‘answers’ to specific drops.
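For readers unfamiliar with how such aggregators are built, the sketch below indicates, in Python, the general shape of a scraper of this kind: fetch a board’s thread listing, keep only posts signed with the expected tripcode, and merge them into a local archive. It is not the Sqraper codebase (which is written in PHP), and the endpoint URL, JSON field names, and tripcode value are placeholders rather than 8kun’s actual interface.

import json
import requests

BOARD_URL = "https://example-imageboard.invalid/board/catalog.json"  # placeholder endpoint
EXPECTED_TRIP = "!!exampletrip"                                       # placeholder tripcode
ARCHIVE_PATH = "drops.json"

def fetch_posts():
    # Fetch the board's thread listing; the field names used here are illustrative only.
    response = requests.get(BOARD_URL, timeout=30)
    response.raise_for_status()
    for thread in response.json():
        for post in thread.get("posts", []):
            yield post

def scrape():
    # Merge posts bearing the expected tripcode into a local archive,
    # deduplicating on post number so that repeated runs are idempotent.
    try:
        with open(ARCHIVE_PATH) as f:
            archive = {post["no"]: post for post in json.load(f)}
    except FileNotFoundError:
        archive = {}
    for post in fetch_posts():
        if post.get("trip") == EXPECTED_TRIP:
            archive[post["no"]] = post
    with open(ARCHIVE_PATH, "w") as f:
        json.dump(sorted(archive.values(), key=lambda p: p["no"]), f, indent=2)

if __name__ == "__main__":
    scrape()

What distinguishes the aggregators from one another is less this collection step than how the collected Drops are then presented for ‘research.’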
QAnon.pub’s interface presents the user with a reverse-chronological grid of Q Drops, bearing essentially the same metadata that would appear on a chan board–albeit without the measures of direct engagement à la the list of replies that would appear on the post in-situ, and without the context that would account for the salience of the tripcode, thread ID, and post number (fig. 4, 5). Drops are also grouped according to the date they were made and numbered for–presumably–ease of reference. Insofar as the material of the Q Drops themselves is concerned, their articulation in QAnon.pub suggests that the aggregator is concerned with the simple provision of the Q Drops as – essentially – texts.
Such an impression is furthered by the aggregator’s affordances. The search bar is a simple filter for the content of the Q Drops (one cannot search for a particular Drop number or filter by date, for example), the Drops can only be ordered in chronological or reverse-chronological sequence, and the child window that appears when clicking the “ANSWERS” button displays a line-by-line analysis of the Drop by way of text taken from the ‘STORM is HERE’ spreadsheet, and occasionally via a Q Proof (fig. 6, 7).[4] Here, in essence, Q Drops (which were originally distributed in time and space) are formatted in such a way that they resolve into a larger text, and as such QAnon.pub enables the analysis of their textual and narrative elements–in linear time–more or less exclusively.
QAgg
QAgg’s domain was registered on the 3rd of November 2019–although the site’s changelog states that the site itself was launched on the 3rd of April 2019 (see DomainTools, Whois Record for QAgg.News; TechmasterQ). At the time of writing (April 2023), the site has been down since late November/early December 2022. In addition to its 4,966 Q Drops, QAgg also maintained extensive archives of posts made to Twitter, Gab, and TRUTH Social by QAnon-relevant figures.
Likely as a result of its being the most recently established major Q Drop aggregator, QAgg is the most complex example of such an archival infrastructure within QAnon. Here, ‘complex’ is not intended to suggest sophistication or legitimacy on the part of QAnon’s ‘researchers’ or their methods, but something more akin to a kind of paranoid psychosis whereby the Q Drops and other materials are rendered available to be ‘researched’ at increasingly molecular scales of detail and abstraction. On QAgg, this impulse is most evident in the aggregator’s emphasis on metadata.
While users can engage with QAgg in a largely textual capacity as one would consult QAnon.pub, the utility of QAgg is in how the aggregator makes the metadata of Q Drops available for users’ ‘research’ (fig. 9). This metadata ranges from the relatively grounded provision of a Drop’s timestamp in unix epoch time and the analysis of the EXIF data in the images of a given Drop to the decidedly esoteric in the form of numerology and ‘deltas.’ The visibility of metadata is toggled by way of the site’s ‘digging options’ (fig. 9), which also affords the ability to filter or rearrange the order of the materials in the interface. QAgg’s search, furthermore, affords the use of sophisticated queries wherein a user can search the aggregator’s materials by date and time (or a range thereof), by Q clock minute, platform or ‘player,’ as well as using Boolean ‘AND’ and ‘OR’ operators to string these specific queries together (TechmasterQ).
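The timestamp arithmetic this ‘digging’ relies on is itself trivial to reproduce. The following Python fragment shows a Drop’s Unix epoch timestamp rendered as a calendar date and, under the assumption that a ‘delta’ here simply means the time elapsed between two timestamped posts, how such a delta is computed; QAgg’s actual conventions (time zones, which pairs of posts are compared, the ‘Q clock’ mapping) are not reproduced.

from datetime import datetime, timezone

def drop_time(epoch_seconds):
    # A post's timestamp, stored in Unix epoch time, rendered as a UTC datetime.
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

def delta(epoch_a, epoch_b):
    # Assumed reading of a 'delta': the elapsed time between two timestamped posts.
    return abs(drop_time(epoch_a) - drop_time(epoch_b))

# Illustrative, made-up values: two posts exactly one day and two minutes apart.
print(drop_time(1509400000))          # a moment in late October 2017, UTC
print(delta(1509400000, 1509486520))  # 1 day, 0:02:00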
Given that QAgg’s archive contains a wealth of additional materials (Twitter, Gab, and TRUTH Social posts from several figures who have–in one way or another–become incorporated into QAnon’s worldview), the act of mapping Q Drops onto external materials and events in the world is effectively automated by way of the various means of corpus-building within its interface. In essence, the affordances of QAgg represent the methodological avant-garde of QAnon ‘research,’ wherein every thinkable mechanism for the extraction of meaning, the bringing into relation, and the ultimate production of ‘research’ findings has been operationalised upon its materials–and if these affordances are not sufficient, then a user can go to QAgg’s ‘Data Science’ tab and download JSONs or CSVs of QAgg’s archives and perform whatever further analysis upon these materials they wish. Although QAgg’s preoccupation with metadata speaks to a certain effort towards the appropriation of data science’s methods in the aggregator, the provision of these files warrants further reflection. Specifically, because it represents an attempt to utilise the technical apparatus of neoliberal governmentality and epistemology (Chun) – aspects of the QAnon subject’s alleged ‘oppressors’ – towards furthering the epistemic basis of the phenomenon’s fascist worldview.[5]
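As a concrete illustration of what such an export affords, the snippet below loads a hypothetical CSV of the kind described and counts posts per calendar month with pandas; the filename and column names are assumptions, since QAgg’s export schema is not documented here.

import pandas as pd

# Hypothetical export; the filename and the 'timestamp' column are assumptions.
drops = pd.read_csv("qagg_export.csv")
drops["posted"] = pd.to_datetime(drops["timestamp"], unit="s", utc=True)

# Once the corpus is tabular, any off-the-shelf analysis becomes possible,
# for example counting posts per calendar month:
per_month = drops.set_index("posted").resample("MS").size()
print(per_month)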
While the actual content of QAgg’s materials is effectively secondary to their metadata, QAgg nevertheless offers several means through which users can share Q Drops to other platforms and, therefore, facilitate the dissemination of their ‘research.’ This includes means to generate a direct link to a given Q drop, copy its text, or generate a jpeg image of it as it appears on QAgg. The most interesting distribution tool, however, is ‘digital camo.’ This affordance produces a randomly generated dazzle camouflage pattern underneath the content of a Q Drop in an effort to evade image recognition-based moderation systems and therefore–in theory–allow for the circulation of Q drops on platforms where QAnon has been banned (Facebook, Google, Twitter, see also fig. 10). Whether or not such mechanisms of evasion work, such anticipation of machinic moderation speaks to questions of QAnon’s potential to develop minor tech in the recognition and subversion of platform governance strategies. It also speaks to particular mechanisms of subject formation within QAnon where the stigmatisation of this material serves to reify it as – essentially – ‘what they don’t want you to see’ (see Barkun). That the effort to subvert this marginalisation takes the form of a camouflage is also instructive as it speaks to the militaristic ontology of the QAnon subject; they are not just ‘researchers,’ but “digital soldiers” (Roose) in an insurgency.
Figure 10: Some examples of digital camo as applied to Q Drop 4966.
Conclusion
While QAnon is definitionally ‘minor’ in the scale of its technical apparatuses and insofar as it too is characterised by “the deterritorialization of language, the connection of the individual to a political immediacy, and the collective arrangement of utterance” (Deleuze and Guattari 18), the fascist ontology and political project that these tactics and infrastructures produce suggests a certain point of difference that warrants further reflection. That is, where the minor is typically concerned with generating – or making space for – difference within the linguistic/technical apparatuses of a hegemon or oppressor, the space of epistemic difference that QAnon has carved out for itself is only different insofar as it is opposed to the political and epistemic order of the neoliberal regime, while remaining essentially hostile to that which remains Other. In fact – and despite the epistemic heterogeneity of the phenomenon – the way that all ‘research’ effectively points to the asserted truth of the phenomenon’s worldview bears the decidedly more concerning implication that the final purpose of QAnon’s minoritarian tactics and tech is aggressively counter-revolutionary. Namely, it is intent on the erasure of difference from the social formation in favour of a reconfiguration of symbolic authority towards the so-called ‘restoration’ of what is perceived to be the QAnon subject’s ‘rightful’ subject position. The increasingly molecular scale of ‘research’ in QAnon can, therefore, be understood as an effort towards scaling down the world’s heterogeneity and difference into the flat onto-ideological field of QAnon’s worldview, which potentially accounts for the deterritorializing vitality of this ‘conspiracy of everything’ (Rothschild). QAnon’s tactics and infrastructures can, therefore, trouble the concept of the minor and suggest a need to grapple with questions of scale, subjectivation, and the technicity of ignorance in addressing the problem of contemporary fascisms.
Notes
- ↑ Hereafter Q Drops are referenced as ‘(Drop number).’ Drop numbers indicate where a particular Drop sits in the chronological sequence of Q’s posts as they appear within the interface of a Q Drop aggregator. While there is some disagreement among Q Drop aggregators as to what is and is not an authentic Q Drop, and therefore some disparities in the Drop numbers for specific Drops (Aliapoulios et al.), the numbering of Q Drops is identical between this paper’s case studies and is therefore used herein.
- ↑ Per Drop 465 (after Q’s move from 4chan to 8chan due to the former’s being ‘compromised’): “No other platforms used. No comms privately w/ anyone.” In reality, this move (platform and rhetorical) was likely an effort on the part of one of the individuals behind Q to consolidate their control over the account (see “Calm Before the Storm”). A tripcode is a cryptographic hash of a user’s password which, when implemented, essentially acts as a username in that it allows a user to be identified on the typically entirely anonymous chan boards.
- ↑ Many QAnon subjects may, in fact, be entirely unaware of the boards. For example, the Capitol Riot’s ‘poster boy’ (Hsu) Doug Jensen–in an interview with FBI agents a few days after the events of January 6th–located the origin of the phenomenon in a Q Drop aggregator (either QAnon.pub or QMap): “it started off on Twitter – no, it started off with q.pub, and then q.pub got shut down, and now I have another one, it's like qalert.something” (Jensen 7).
- ↑ Based on the document’s comment history (as version history is unavailable) the ‘STORM is HERE’ spreadsheet appears to have been a collaborative effort at a line-by-line analysis of all Q drops in chronological order hosted on Google Sheets. Although at the time of writing the document has now been taken down by Google for violating the platform’s terms of service, an archived version from the 17th of March 2022 can be accessed via the following link: https://docs.google.com/spreadsheets/d/1eQXM6KLDcVGyMXhqJxNFyA8TvBoSibvFRZOVYaMPjT8/edit?usp=sharing
- ↑ See also Lee et al. on Covid-19 sceptics’ use of analogous rhetoric and methods towards similarly reactionary aims.
Works cited
Aliapoulios, Max, et al. “The Gospel According to Q: Understanding the QAnon Conspiracy from the Perspective of Canonical Information.” ArXiv:2101.08750v2 [Cs], May 2021. arXiv.org, https://arxiv.org/abs/2101.08750v2.
Barkun, Michael. A Culture of Conspiracy: Apocalyptic Visions in Contemporary America. University of California Press, 2003.
Beran, Dale. It Came from Something Awful: How a Toxic Troll Army Accidentally Memed Donald Trump into Office. First edition, All Points Books, 2019.
“Calm Before the Storm.” Q: Into the Storm, directed by Cullen Hoback, 1, HBO, 21 Mar. 2021.
Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. The MIT Press, 2011.
Coleman, E. Gabriella. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Verso, 2014.
Colley, Lori. “The Day I Knew Q Wasn’t a Hoax.” QAnon: An Invitation to the Great Awakening, edited by Captain Roy D and Dustin Nemos, 2019.
De Keulenaar, Emillie. “Freedom and Taboos in the International Ghettos of the Web – OILab.” OILab, 2018, https://oilab.eu/freedom-and-taboos-in-the-international-chanosphere/.
de Zeeuw, Daniël, and Alex Gekker. “A God-Tier LARP? QAnon as Conspiracy Fictioning.” Social Media + Society, 2023.
Deleuze, Gilles, and Félix Guattari. Kafka: Toward a Minor Literature. University of Minnesota Press, 1986.
“Do You Believe in Coincidences?” Q: Into the Storm, directed by Cullen Hoback, 2, HBO, 21 Mar. 2021.
DomainTools. Whois Record for QAgg.News. https://whois.domaintools.com/qagg.news. Accessed 28 June 2022.
---. Whois Record for QAnon.Pub. https://whois.domaintools.com/qanon.pub. Accessed 28 June 2022.
Dylan Louis Monroe at Conscious Life Expo 2019. Directed by Deep State Mapping Project, 2019. YouTube, https://www.youtube.com/watch?v=JzVW3KR1Dqw.
Hagen, Sal, et al. Infinity’s Abyss: An Overview of 8chan – OILab. 2019, https://oilab.eu/infinitys-abyss-an-overview-of-8chan/.
Hagen, Sal. “Rendering Legible the Ephemerality of 4chan/Pol/ – OILab.” OILab, 2018, https://oilab.eu/rendering-legible-the-ephemerality-of-4chanpol/.
---. “‘Who Is /Ourguy/?’: Tracing Panoramic Memes to Study the Collectivity of 4chan/Pol/.” New Media & Society, Feb. 2022, p. 146144482210782. DOI.org (Crossref), https://doi.org/10.1177/14614448221078274.
Hayes, David. Calm Before the Storm. Self Published, 2020.
Hsu, Spencer S. “QAnon ‘Poster Boy’ for Capitol Riot Sent Back to Jail after Violating Court Order to Stay off Internet.” Washington Post, 2 Sept. 2021. www.washingtonpost.com, https://www.washingtonpost.com/local/legal-issues/douglas-jensen-jailed-qanon-addiction/2021/09/02/50ee9628-0c08-11ec-aea1-42a8138f132a_story.html.
Jensen, Douglas. Interview of: Douglas Austin Jensen. Interview by Tyler Johnson and Scott James, 8 Jan. 2021, https://s3.documentcloud.org/documents/21581097/jensen-transcript.pdf.
Keen, Florence. “After 8chan – Centre for Research and Evidence on Security Threats.” Centre for Research and Evidence on Security Threats, 4 Nov. 2020, https://crestresearch.ac.uk/comment/after-8chan/.
Kir, Freja, et al. Scientometrics of Conspiracy Creation: Tracing Conspiracy Making on 4Chan. University of Amsterdam, 2020, https://wiki.digitalmethods.net/Dmi/QAnon-ScientometricsofConspiracyCreationTracingConspiracymakingon4Chan.
Lee, Crystal, et al. “Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online.” Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021, pp. 1–18. arXiv.org, https://doi.org/10.1145/3411764.3445211.
Marwick, Alice, and Rebecca Lewis. Media Manipulation and Disinformation Online. Data & Society Research Institute, 2017.
Marwick, Alice, and William Partin. “Constructing Alternative Facts: Populist Expertise and the QAnon Conspiracy.” New Media & Society, 2022, p. 21.
Moot’s Final 4chan Q&A. Directed by 4chan, 2015. YouTube, https://www.youtube.com/watch?v=XYUKJBZuUig.
O’Connor, Ciaran, et al. The Boom Before the Ban: QAnon and Facebook. Institute for Strategic Dialogue, 12 Dec. 2020.
OILab. The Baker’s Guild: The Secret Order Countering 4chan’s Affordances – OILab. 2018, https://oilab.eu/the-bakers-guild-the-secret-order-countering-4chans-affordances/.
Pamphlet Anon and Radix. “Changing the Narrative: Trump and the Media.” QAnon: An Invitation to the Great Awakening, edited by Captain Roy D and Dustin Nemos, 2019, pp. 93–104.
Phillips, Whitney, et al. “Trolling Scholars Debunk the Idea That the Alt-Right’s Shitposters Have Magic Powers.” Vice, 22 Mar. 2017, https://www.vice.com/en_us/article/z4k549/trolling-scholars-debunk-the-idea-that-the-alt-rights-trolls-have-magic-powers.
QAlerts. “Sqraper: PHP 8Kun Q Post Scraper.” GitHub, 5 June 2020, https://web.archive.org/web/20200605191831/https://github.com/QAlerts/Sqraper.
Roose, Kevin. “A QAnon ‘Digital Soldier’ Marches On, Undeterred by Theory’s Unraveling.” The New York Times, 17 Jan. 2021. NYTimes.com, https://www.nytimes.com/2021/01/17/technology/qanon-meme-queen.html.
Rothschild, Mike. The Storm Is upon Us: How QAnon Became a Movement, Cult, and Conspiracy Theory of Everything. Monoray, 2022.
Sandifer, Elizabeth. Neoreaction a Basilisk: Essays on and around the Alt-Right. 2018.
See, Rose. From Crumbs to Conspiracy. Swarthmore College, 2019.
Sommer, Will. “QAnon, the Pro-Trump Conspiracy Theorists, Now Believe JFK Jr. Faked His Death to Become Their Leader.” The Daily Beast, 2 Aug. 2018. www.thedailybeast.com, https://www.thedailybeast.com/qanon-the-pro-trump-conspiracy-theory-now-believes-jfk-jr-faked-his-death-to-become-its-leader.
---. Trust the Plan: The Rise of QAnon and the Conspiracy That Reshaped the World. 4th Estate, 2023.
TechmasterQ. “FAQ.” QAgg.News, https://qagg.news. Accessed 31 Oct. 2022.
Tripodi, Francesca. Searching for Alternative Facts. Data & Society Research Institute, 16 May 2018, p. 64.
The Fall of the Cabal. Directed by Janet Ossebaard, 2020, https://www.bitchute.com/video/aDjnD3tsbFSH/.
Zadrozny, Brandy, and Ben Collins. “How Three Conspiracy Theorists Took ‘Q’ and Sparked Qanon.” NBC News, 15 Aug. 2018. www.nbcnews.com, https://www.nbcnews.com/tech/tech-news/how-three-conspiracy-theorists-took-q-sparked-qanon-n900531.
Teodora Sinziana Fartan
Rendering Post-Anthropocentric Visions:
Worlding As a Practice of Resistance
Rendering Post-Anthropocentric Visions: Worlding As a Practice of Resistance
Abstract
This paper formulates a strategic activation of speculative-computational practices of worlding by situating them as networked epistemologies of resistance. Through the integration of Deleuze and Guattari's concept of a ‘minor literature’ with the distributed software ontologies of algorithmic worlds, a tentative politics for thinking-with worlds is mapped, anchored in the potential of worlding to counter the dominant narratives of our techno-capitalist cultural imaginary. With particular attention to the ways in which the affordances of software can become operative and offer alternative scales of engagement with modes of being-otherwise, an initial theoretical mapping of how worlding operates as a multi-faceted and critical storytelling practice is formulated.
Introduction
Emanating from the fog of late techno-capitalism, the contours of a critical techno-artistic practice are starting to become visible - networked, immaterial and often volumetric, practices of worlding surface as critical renderings concerned with speculatively envisioning modes of being otherwise through computational means. By intersecting software and storytelling, these practices cultivate more-than-human assemblages that foreground possible world instances - worlding, thus, becomes politically charged as a networked epistemology of resistance, where dissent is enabled through the rendering of alternative knowledge systems and relational entanglements existing beyond the ruins of capitalism.
In the ontological sense, practices of worlding materialise as algorithmic portals into fictional terrains where alternative modes of being and knowing are envisioned; they refuse to adopt a totalising view of the megastructure of capitalism’s cultural imaginary and instead opt to zoom in on the cracks appearing along its edges, where other narrative possibilities are starting to sprout and multiply. Through the evocative affordances of software, practices of worlding teleport us forwards, amidst the ruins of the Anthropocene, where “unexpected convergences” emerge from the debris of what has passed (Tsing 205).
In their quests for speculative possibility, world-makers are dislodging existing hi-tech systems and platforms from their conventional economic or institutional roles and repurposing them as technologies of possibility which seek to de-centre the dominant narratives of the Western cultural imagination. A reversing of scales therefore occurs, where 'high tech' becomes deterritorialized and mobilised towards the objectives of a 'minor tech', which seeks to counter the universal ideals embedded in technologies through foregrounding "collective value" (Cox and Andersen 1).
Consequently, recent years have seen an increased interest in the (mis)use of software such as game engines or machine learning for the artistic exploration of crossovers between the technological, the ecological and the mythical; specifically, through the emergence of increasingly capable and accessible platforms such as Unreal Engine and Unity, game engines have become the creative frameworks of choice for conjuring worlds due to their potential for rapid prototyping and increased capacity for rendering complex, real-time virtual imaginaries. Whilst worlding can exist across a spectrum of algorithmically-driven techniques and systems, it is most often encountered through (or integrates within its technological assemblage) the game engine, as we will see in the course of this paper.
In what follows, I aim to at once activate an initial cartography of ‘worlding’ as an emergent techno-artistic praxis and propose a tentative politics for thinking not only through, but also with worlding as a process that can facilitate ways of imagining outside the rigid narratives of techno-scientific capitalism.
I propose that it is particularly through its refiguring of computational methodologies that worlding positions itself as an exercise in creative resistance. Through a refiguration of technology as a speculative tool, worlding offers a potent method for thinking outside of our fraught present by algorithmically envisioning radically different ontologies - these modes of being-otherwise, I contend, also bring forth a new epistemological and aesthetic framework rooted in both the affordances of the technological platforms used for their production and the relational assemblages at their core: the network, in itself, becomes unearthed throughout this paper as the essence of algorithmic world instances and is proposed as a mode of conceptualisation for these practices.
Within the context of political resistance, by approaching these algorithmically-rendered worlds through the lens of Deleuze and Guattari’s concept of a 'minor literature' (16), we can trace the emergence of minor worlds as potent and powerful assemblages for countering the majority worlds of platform capitalism and their dominant socio-cultural narratives - what can these minor worlds reveal about more-than-human collaborations and the critical role of software within speculative practices? How do they become operative as instruments for de-centering the master narratives of our present? What alternative knowledges do they draw upon within their ontologies and what potentialities do they open up for encountering these?
Throughout this paper, the worlds conjured by artists such as Ian Cheng, Sahej Rahal, Keiken and Jenna Sutela will be drawn on in order to gain insight into the ways in which worlding at once becomes operative as a form of social and political critique and activates a process of collective engagement with potent acts of futuring, where a co-existence together and alongside the non-human is foregrounded.
Worlding in the age of the Anthropocene
Today, there seems to be a widespread view that we are living at the end - of liberalism, of imagination, of time, of civilisation, of Earth; engulfed in the throes of late capitalism, conjuring a possible alternative seems exceptionally out of grasp. In his novel Pattern Recognition, which constitutes a reflection on the human desire to detect patterns and meaning within data, William Gibson formulates a statement that rings particularly relevant when superimposed onto our present state:
we have no idea, now, of who or what the inhabitants of our future might be. In that sense, we have no future. Not in the sense that our grandparents had a future, or thought they did. Fully imagined cultural futures were the luxury of another day, one in which 'now' was of some greater duration. For us, of course, things can change so abruptly, so violently, so profoundly, that futures like our grandparents' have insufficient 'now' to stand on. We have no future because our present is too volatile […] We have only risk management. The spinning of the given moment's scenarios. Pattern recognition. (57)
Here, Gibson makes reference to the near-impossibility of imagining a clear-cut future in a present that is marred by ecological, political and social unrest - I contend that this fictional excerpt is distinctly illustrative of the affective perception of life within the age of the Anthropocene, where the volatility of the present, caused by the knowledge that changes on a planetary scale are imminent, ensures that a given future can no longer be predicted or visualised. Without the ability to rationally deduce a logical outcome, what we, too, are left with is a sort of pattern recognition - an attempt to find patterns for ways of being and knowing that can become the scaffold for visions of the future; as Gibson foregrounds, today, rather than being logically deducible, the future needs to be sought through the uncovering of new patterns.
Just like Gibson's character, we do not know what kind of more-than-human assemblages will inhabit our future states - and it is precisely here that this act of pattern recognition intersects with the core agenda of worlding: how can we envision patterns of possible futures using computation? Within our own contemporary context, where asymmetrical power structures, surveillance capitalism and the threat of climate change deeply complicate our ability to think of possible outcomes, where can new patterns emerge?
In the wake of the Anthropocene, feminist critical theory has launched several calls for seeking such patterns with potential to provide a foothold for experiments in imagining future alternatives: from Stengers’ bid to cultivate “connections with new powers of acting, feeling, imagining and thinking” (24), to Haraway’s request for critical attention to “what worlds world worlds” ("Staying with the trouble" 35) and Le Guin’s plea for a search for the ‘other story’ (6) - an alternative to the linear, destructive and suffocating narratives regurgitated perpetually within the history of human culture. We can, therefore, trace the emergence of a collective utterance, an incantation resonating across feminist epistemologies, emphasising the urgency of developing patterns for thinking and being otherwise - as Rosi Braidotti asks, “how can we work towards socially sustainable horizons of hope, through creative resistance?” (156)
In a reality marred by a crisis of imagination, where “it is easier to imagine the end of the world than that of capitalism” (Fisher 1), casting one’s imagination into a future that refuses the master narratives of capitalism is no easy feat, and requires, as Palmer puts it, a "cessation of habitual temporalities and modes of being" ("Worlding") in order to open up spaces of potentiality for speculative thinking - to think outside ourselves, towards possible future alternatives, has therefore become a difficult exercise within the current socio-political context.
We can then identify the most crucial question for the agenda of worlding: what comes after the end of our world (understood here as capitalist realism (Fisher 1))? Or, better phrased, what can exist outside the scaffolding of reality as we know it, dominated by asymmetric power structures, infused with injustice, surveilled by ubiquitous algorithms and continuously subjected to extractive practices? And what kind of technics and formats do we need to visualise these modes of being otherwise?
Techno-artistic worlding practices attempt to intervene precisely at this point and open up new ways of envisioning through their computational nature - which, in turn, produces new formats of relational and affective experience through the generative and procedural affordances of software. The world-experiments that emerge from these algorithmic processes constitute hybrid assemblages of simulated spaces, fictive narratives, imagined entities and networked entanglements - collectively, they speculatively engage with the uneven landscape of being-otherwise, its multiplicities and many textures and viscosities.
Listening to the operational logic of computationally-mediated worlds
To begin an analysis of how worlding attempts to engage with the envisioning of alternatives, we'll first turn to Donna Haraway, who further instrumentalizes the idea of patterning introduced earlier through Gibson: when situating worlding as an active ontological process, she says that "the world is a verb, or at least a gerund; worlding is the dynamics of intra-action [...] and intra-patience, the giving and receiving of patterning, all the way down, with consequences for who lives and who dies and how" ("SF: Science Fiction, Speculative Fabulation, String Figures, So Far" 8). By making the transition from noun to verb, from object to action, worlds and patterns become active processes of worlding and patterning. In Haraway's theorising of speculative fabulation, patterning involves an experimental process of searching for possible "organic, polyglot, polymorphic wiring diagrams" - for a possible fiction, whilst worlding encapsulates the act of conjuring a world on the basis of that pattern ("SF: Science Fiction, Speculative Fabulation, String Figures, So Far" 2). Furthermore, Haraway situates worlding as a practice of collective relationality, of intra-activity between world-makers and world-dwellers, as well as between world and observer, through a networked process of exchange. It is important to note that worlding, to Haraway, is far from apolitical: she evidences its relevance by defining it as a practice of life and death, which has the potential to engage in powerful formulations of alternatives - acts which might be crucial in establishing actual future states. As she argues, “revolt needs other forms of action and other stories of solace, inspiration and effectiveness” ("Staying with the Trouble" 49).
To gravitate towards an understanding of these other stories, we'll approach worlding in context through the eyes of Ian Cheng, an artist working with live simulations that explore more-than-human intelligent assemblages. Cheng defines the world as “a reality you can believe in: one that promises to bring about habitable structure from the potential of chaos, and aim toward a future transformative enough to metabolise the pain and pleasure of its dysfunction” ("Worlding Raga") - a world, in this perspective, needs to be an iteration of the possible, one that presents sufficient transformative power for existing otherwise; the referencing of 'belief' is also crucial here as, within capitalist realism, where all "beliefs have collapsed at the level of ritual or symbolic elaboration" (Fisher 8), its very activation becomes an act of revolt.
Of worlding, Cheng says that it is “the art of devising a World: by choosing its dysfunctional present, maintaining its habitable past, aiming at its transformative future, and ultimately, letting it outlive your authorial control” ("Worlding Raga") - the world-maker, therefore, does not only ideologically envision a possible reality, but also renders it into existence through temporal and generative programming. Cheng balances this definition within the context of his own practice concerned with emergent simulations, where authorship becomes a distributed territory between the human and more-than-human.
It is important to note that Cheng refuses to ascribe any particular form, medium or technology as an ideal template of worlding - rather, discreetly and implicitly, Cheng’s definition evokes the operational logic of algorithms by referencing the properties of intelligent and generative software systems. The definition's refusal of medium-specificity mirrors the multiplicity of ways in which algorithms can world: whilst many of these worlds initially unfold as immersive game spaces (and then become machinima, or animated films created within a virtual 3D environment (Marino 1), when presented in a gallery environment), satellite artefacts can emerge from a world's algorithmic means of production, often becoming a physical manifestation of that world's entities - taking shape, for example, as physical renditions of born-digital entities, as seen in the sculptural works that emerge from Sahej Rahal's world, Antraal, where figures of the last humans, existing in a post-species, post-history state, are recreated outside of the gamespace.
Transgressions of the fictional world into real-space can take a variety of shapes, depending on the politics and intentions of that world: other examples of worlds spilling out of rendered space and into reality are Keiken's Bet(a) Bodies installation, where a haptic womb is proposed as an empathic technology for connecting with a more-than-human assemblage of animal voices, and Ian Cheng’s BOB Shrine App that accompanied his simulation BOB (Bag of Beliefs) in its latter stages of development, through which the audience can directly interact with the AI by sending in-app 'offerings', which impress what Cheng terms 'parental influence' on BOB, in order to offset its biases.
Consequently, it becomes apparent that practices of worlding are governed by an inherent pluralism - due to this multiplicity of possible tools and algorithms that can operate within the scales of worlding, we are in need of an open-ended definition that can encapsulate commonalities whilst also allowing for plurality of form - I propose here to focus on the unit operations making these worlds possible. From gamespace environments to haptic-sonic assemblages or interactive AI, the common denominator of all these artefacts does not lie in their media specificity, but rather in their software ontology and its procedural affordance, defined by Murray as "the processing power of the computer that allows us to specify conditional, executable instructions" ("Glossary").
A working definition for worlding that integrates unit operations with speculative logic can be therefore traced: worlding is a sense-making exercise concerned with metabolising the chaos of possibility into new forms of order through the relational structures enabled by procedural affordances. It involves looking for the logic that threads a world together and then scripting that logic into networked algorithms that render it into being. To world with algorithms is to dissent from the master narratives of capitalism by critically rendering habitable alternatives.
Crucial to this definition is an understanding of software as a cultural tool - its procedural affordances, as Murray reflects, have "created a new representational strategy, [...] the simulation of real and hypothetical worlds as complex systems of parameterised objects and behaviours" ("Glossary"). To understand the operative logic that enables procedural worlds, a similar pluriversal analytical model to that proposed by de la Cadena and Blaser (4) becomes necessary for conceiving these ecologies of practice - I propose, therefore, a conceptual model for understanding the symbolic centre of worlding by turning to the ways in which software itself creates and communicates knowledge: the network.
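To make the idea of 'parameterised objects and behaviours' concrete for readers less familiar with procedural media, here is a deliberately small toy in Python (not drawn from any of the artists' works discussed) in which a handful of conditional, executable rules, iterated over time, produce a 'world' whose unfolding state is generated rather than directly authored, and whose entities exist only in relation to one another.

import random

# A minimal 'world': parameterised entities plus conditional, executable rules.
world = [{"name": "entity-" + str(i), "energy": random.randint(1, 10)} for i in range(5)]

def step(world):
    # Each tick, entities gain or lose energy; flourishing ones spawn offshoots,
    # exhausted ones fall out of the world. The resulting state is procedural:
    # it follows from the rules rather than from any directly authored script.
    next_world = []
    for entity in world:
        entity["energy"] += random.choice([-2, -1, 1, 2])
        if entity["energy"] > 12:
            next_world.append({"name": entity["name"] + "'", "energy": 6})
            entity["energy"] = 6
        if entity["energy"] > 0:
            next_world.append(entity)
    return next_world

for tick in range(10):
    world = step(world)
    print(tick, [entity["name"] for entity in world])

Trivial as it is, such a sketch marks the difference between representing a world and specifying the conditions under which one unfolds.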
Reflecting on Tara McPherson's assertion that “computers are themselves encoders of culture” (36), being able to produce not only representations but also epistemologies, one must wonder, then: in the context of algorithmic worlds, how do their networked cores become culturally charged? What kind of new knowledges become encoded in their procedural affordances?
Thinking with networks: an epistemic shift towards relationality
Another vector through which the nature of worlding can be theoretically approached emerges from Anna Munster’s theorising of networks, particularly her definition of ‘network anaesthesia’ - a term she develops to suggest the numbing of our perception towards networks, obscuring their unevenness and relationality (3). A similar anaesthesia can be identified when working with platformised tools such as game engines, where, as Freedman points out, "the otherwise latent potential of code, found in its modularity, is readily sealed over" - because code becomes concretized into objects, the computational inner workings of certain aspects become blackboxed (Anable 137). The trouble with engines is that, in our case, they promote a worlding anaesthesia, where the web of relations at play within a world instance is not immediately apparent because the software that sustains it is obscured.
Wendy Chun speaks of a similar paradox to that of network anaesthesia by referencing the ways in which computation complicates both visuality and transparency: visuality in the sense of the proliferation of code objects that it enables, and transparency in the sense of the effort of software operations to conceal their input/output relationalities. Visualising the network, therefore, becomes an exercise in revealing the inner workings of worlds, one that resists the intentional opacity of the platforms that become involved in their genesis.
Munster, too, calls for more heightened reflective and analytical engagements with “the patchiness of the network field” (2) by making its relations visible (and implicitly knowable) through diagrammatic processes. She contends that, in order to decode the networked artefact, we must attempt to understand the forces at play within it from a relational standpoint:
We need to immerse ourselves in the particularities of network forces and the ways in which these give rise to the form and deformation of conjunctions — the closures and openings of relations to one another. It is at this level of imperceptible flux — of things unforming and reforming relationally — that we discover the real experience of networks. This relationality is unbelievably complex, and we at least glimpse complexity in the topological network visualisation. (3)
For Munster, therefore, the structuring of relations and their interconnectedness is paramount to any attempt at making sense of the essence of a software artefact or system. This relational perspective towards networked assemblages opens up a potent line of flight for the conceptualisation of the processes involved in the rendering of worlds - if the centre of a world is a network that can in itself sustain a number of inputs and outputs of varying degrees of complexity, interlinked in a constant state of flux, then any attempt to understand such a world must involve conceptual engagement with the essence of the network, or the processes through which relations open and close and produce these states of flux. Engagement with algorithmic worlds, therefore, moves from the perceptual into the diagrammatic, from a practice of observation to one of sense-making, involving not only visualisations but also a certain computational knowing, an understanding of relations and flows. I argue here that engagement with worlds necessitates a heightened form of cognitive engagement, one that allows us to understand the object of discussion differently, through a foregrounding of relational exchanges.
I propose a turn towards cartographing the relations that operate within a world on an affective level, due to the spaces of evocative possibility opened up by a world's procedural affordances. Murray draws on EA's 1986 advert asking "Can a computer make you cry?" to reflect on the need for increased critical attention to be given to the ways in which affective relations form within a procedural space; she argues that "tears are an appropriate measure of involvement because they are physiological and suggest authenticity and depth of feeling" (84), but clarifies that it is precisely the visceral aspect of crying that is of interest - the focus is not on "sad content, but compellingly powerful and meaningful representation of human experience" (85). She observes that, in the domain of video games, whilst there are some experiments with instilling emotion in viewers, these are not yet complex structures of feeling; she calls, therefore, for the development of computational experiences that constitute "compellingly powerful and meaningful representation of human experience", highlighting the crucial importance of affect. By further extending this idea into the territory of worlding, it becomes apparent that structures of feeling are essential for creating worlds that engage in resistance, and I identify Murray's call as a core element on the agenda of worlding.
Today, we are already seeing experiments in ‘knowing’ networks emerging - we'll circle back to Cheng here, who seems to have established a practice of conceptually diagramming his work on BOB (Bag of Beliefs) - one that does not simply relate input to output or technically map, but also pays attention to producing a cartography of the affective relations scripted into BOB's world. By showing increased tendencies towards engagement with not only the network itself, but also the networking, Cheng traverses the crucial space between the perceived (the immediate) and the perceptual (the more esoteric, affectively charged circulations of data within a system), as seen in the examples of Figures 2 and 3, which do not seek to formally capture the elements of a network assemblage, but rather, to create a “topological surface” (Massumi 751) for the experience of that world.
As Munster inflects, the goal is “not to abstract a set of ideal spatial relations between elements but to follow visually the contingent deformations and involutions of world events as they arise through conjunctive processes” (5) - in Cheng’s diagram, we see a phenomenological and epistemological topology of the networking processes at play, where affective relations are beginning to be mapped alongside algorithmic diagramming - in the spaces between memory, narrative and desire, a spectrum of relational flows and possibilities emerge. Demonstrating the essence of the network through its flow of relations, Cheng attempts to diagram the simulation across both affective and technical scales.
Thinking with (rather than simply through) worlding can, therefore, produce an affective networked epistemology where an increased attention to relationality can cultivate new ways of both seeing and understanding beyond the purely machinic. A question of scale emerges here: how do affective and technological scales become intertwined within computer-mediated worlds? When thinking-with worlds, care needs to be taken to address the affective scale alongside the technical one - how do these scales have the potential to affect one another and the much larger scale of human experience? This vector of research constitutes a significantly larger line of enquiry, one that I will delegate to worlding's future research agenda - for now, I'll return to Murray's note on computers and tears and ask: could worlds make us cry?
Rendering resistance: the emergence of minor worlds
In an age of anxiety underscored by invasive politics and ubiquitous algorithmic megastructures, the major technologies of the present such as artificial intelligence, game engines, volumetric rendering software and networked systems are employed in the service of extractive and opaque practices. However, as Foucault proclaims, “where there is power, there is resistance” (95): when dislodged from their socio-economic frameworks and taken amidst the ruins of the same reality, crumbling under the weight of late techno-capitalism, these technologies can also become instruments of dissent - to simulate a world volumetrically, epistemologically and relationally becomes an exercise in (counter)utilising the major technologies of the present in order to produce tactics that lead out of these ruins and into a future dominated by new, pluralistic, decentralised and distributed agencies taking shape according to “ecological matters of care” (Puig de la Bellacasa 24).
To resist, here, means to engage with the broader questions of power and refusal within the context of software practices. Within practices of worlding, this refusal of capitalism’s master narratives in favour of imagining otherwise takes shape through a more-than-human entanglement with technologies that are capable of procedurally rendering a glimpse into alternative modes of being through simulation. As LeGuin proposes, technology can be dislodged from the logic of capitalism and refigured as a cultural carrier bag (8); in this sense, she envisions this refiguration as a catalyst for a new form of science fiction, one that becomes a strange realism, re-conceptualised as a socially-engaged practice concerned with affective intensity and multiplicity. Parallel to LeGuin, Nichols also reflects on the tensions between “the liberating potential of the cybernetic imagination and the ideological tendency to preserve the existing form of social relations” (627). Nichols argues that there are inherent contradictions embedded within software systems, emerging from the dual ontology of software as both a mode of control and a force that enables collective utterance and deterritorialization; he writes of cybernetic systems:
If there is liberating potential in this, it clearly is not in seeing ourselves as cogs in a machine or elements of a vast simulation, but rather in seeing ourselves as part of a larger whole that is self-regulating and capable of long-term survival. At present this larger whole remains dominated by parts that achieve hegemony. But the very apperception of the cybernetic connection, where system governs parts, where the social collectivity of mind governs the autonomous ego of individualism, may also provide the adaptive concepts needed to decenter control and overturn hierarchy. (640)
Both LeGuin's and Nichols' perspectives propose a seizing of the means of computation against today’s structures of control - this line of thinking is closely aligned with Deleuze and Guattari's theorising of a “minor literature” (16). First outlined in their book Kafka: Toward a Minor Literature, their understanding of 'the minor' is presented through an analysis of Kafka's literary practice. It is important to note here that the idea of the minor is not utilised by Deleuze and Guattari to denote something small in size or insignificant; rather, the minor operates in a politically-charged sense, where it refers to an alternative to the majority: "a minor literature is not the literature of a minor language but the literature a minority makes in a major language" (Deleuze et al. 16) - as such, the minor becomes a sort of counter-scale emerging within the overarching political, social, economic and technological scales dominating society.
Deleuze and Guattari further trace the contours of three characteristics of minor literature: the deterritorialization of language, the connection of the individual to a political immediacy, and the collective assemblage of enunciation. They identify these three conditions as being met in both the content and the form of Kafka's work: Kafka was himself part of a minority (as a Czech Jew writing in Prague) and was therefore using the majority language of control (German) to produce literature that gave a voice to the marginalised perspectives of those pushed to the fringes of society. Kafka’s work, therefore, becomes an example of how a minority can de-territorialise a mode of expression and use it to affirm perspectives that do not belong to the overall culture that they are inhabiting. The form of Kafka’s work was also minor in structure, which Deleuze and Guattari identified to be networked, claiming that it was akin to "a rhizome, a burrow" (Deleuze et al. 1) – the quality of being minor, therefore, does not only involve using master frameworks to express alternative views, but can also include exploring other formats of engagement that are distributed and non-linear. Furthermore, Deleuze and Guattari also highlight the transformative power of a minor literature by way of affective resonance specifically, identifying affect as a core element within minor practices.
Perhaps the best way to analyse the concept of the minor as it emerges today is to situate it within the context of resistant technologies. I ask, therefore: what could be a minor tech?
The concept of a minor literature suggests that a re-purposing of a majority language into a minor one can be a powerful method for subversion and resistance against dominant structures of power. Minor literature emerges within marginalised communities that hold other beliefs to those of the major culture that they live in, offering alternative narratives through the deterritorialization of major languages into collective modes of expression that challenge dominant discourses.
A minor tech, then, would be a technology that is deterritorialised – destabilised from its original position and moved into a new territory of possibility; because minor tech exists within a far narrower space than majority tech, everything within it becomes political; and finally, it presents collective value – the latter, to Deleuze and Guattari, is not necessarily ascribed to the collaboration of several individuals for the production of minor language, but rather to the collective value that minority artwork holds; they further highlight the fact that, conceptually, there are insufficient conditions for an individual utterance to be produced in the context of the minor (whilst Big Tech has an increased ability to cultivate talent, individualism and mastery, as well as access to high-end tools, minor tech follows a model that doesn't adhere to the existing patterns of the major and often involves DIY, hacking, self-taught methods and collective sharing of knowledge). Minor tech, therefore, becomes cumulative through this sense of the collectivity forming at the core of its production, which generates active solidarities across communities, practitioners and artefacts - a solidarity that cements itself as a collective utterance.
Similarly, the turn towards rendering minor worlds is enabled by the recent deployment of game engine technologies towards critical digital experimentation, enabling artists to produce increasingly complex digital artefacts. Whilst game engines themselves are readily accessible, the majority practices that we can identify take an industrialised, large-scale approach to utilising them, which involves multiple teams working across the production of software in a distributed way, oftentimes split between programmers, who create a game’s system, and designers, who produce assets – this approach is perhaps best seen in AAA productions, which become “collaborative enterprises” (Freedman). Game engines can therefore be considered a majority technology, deeply intertwined with industrialised production methods geared towards economic value and the production of specific, major models of play. Other, more modest, minor ways of engaging with game engines have emerged as a consequence, ones where, most notably, the organisational split between system and asset (or visuality) disappears – attempts at producing minor games are most notably identifiable within indie development communities; however, we can also note the recent emergence of a minor practice concerned with seizing the means of rendering for the purposes of critically exploring more-than-human worlds.
Consequently, we see the emergence of collective efforts to utilise game engines critically within a context of techno-artistic practice, where the technology becomes minor through its harnessing towards the production of minor worlds, where the entertainment-focused properties of commodified games are replaced with experimental assemblages and their affect constellations. Attentive to the properties of a minor language formulated by Deleuze and Guattari, today’s turn towards the production of virtual worlds as sites of alternative possibilities is reterritorializing the existing entertainment-centric and economically-driven mode of existence of immersive game productions. Within the parameters of the game engine itself, the various features, interfaces and functionalities of mainstream game design software, which are geared towards competitive ludic productions, become subverted or dislodged from their privileged status.
When the majority language of the game engine is deployed into the minor territories of experiment and social critique, the connection of the audience with political immediacy is facilitated through the experimental readings that are enabled via computational speculation. As Haraway reminds us, dissent needs “other stories of solace, inspiration and effectiveness” (2016, 49). Pushing beyond the transformation of given content into the appropriate forms expected of major games, these worlds take shape within the territory of the minor, where experimental and non-linear formats that operate in networked and multifaceted ways become materialized. Following in this line of thought, a minor world aims to disrupt established norms and open up new possibilities for social and political transformation - Deleuze positions the minor relationally, claiming that it has "to do with a model – the major – that it refuses, departs from or, more simply, cannot live up to" (Burrows and O’Sullivan, 19).
The emergence of minor worlds, therefore, poses relevant questions about the ways in which collaborating with machines gives rise to practices of techno-artistic resistance that seek decolonial, anti-capitalist and care-driven ways of being. When applied to practices of worlding, the concept of the minor highlights the collective agency of artists in constructing alternative worlds that challenge dominant narratives and ideologies - minor worlds represent a rupturing with the ordinary regime of the present through their undoing and reassembling of the operative logic of reality. Their use of algorithmic processes and tools such as game engine technologies or machine intelligence can result in radically different modes of existence from those dictated by the cultural narratives of capitalism. As Deleuze and Guattari infer, minor practices provide “the means for another consciousness and another sensibility” (17).
One example of envisioning another sensibility through a refiguration of more-than-human relationships can be found in Sahej Rahal’s work Antraal, which explores what it would mean to live as the final humans, now turned into a-historical machines that roam the Earth. In this work, a virtual biome shows strange-limbed non-human actors roaming a video game simulation, operated by artificially intelligent algorithms that act counterintuitively to one another. Marred by the paradoxes scripted in their code, these beings exhibit chaotic behaviours as their machine intelligences struggle, their ontologies lying far outside human-centred thought capabilities - we can see or hear what they are, but we can only assume what they might be. As Negarestani observes, these last humans ‘have refused and subverted the totality of their contingent appearance and significance of their historical manifestations as mere misconceptions of what it means to wander in time, as an idea and not merely a species’ (24), existing in a state that refuses the current epistemological framework of humanity. Rahal's use of video game engines and artificial intelligence allows for thought to be cast speculatively, into a future where existence is dislodged from today's temporal and ontological frameworks and re-established according to different parameters.
Another experiment in exploring more-than-human alliances takes shape in the work of Jenna Sutela, via the project nimiia cétiï, which envisions a language existing outside the master parameters of human expression by deploying intelligent algorithms in the role of a medium that co-interprets data from the Bacillus subtilis bacterium, said to be able to survive on Mars, with recordings of Martian language received from the spirit realm by the French medium Hélène Smith. Zhang points out that “Sutela channels the language of the Other to muddy the waters of human sapience, reminding us in synthetic, spiritual and alien tongues that we hold a monopoly over neither intelligence nor consciousness” (154) - nimiia cétiï is, in essence, a minor language that is at once an exploration in seeking other modes of expression and a testament to the possibilities that lie beyond the frameworks of language cultivated throughout human history.
Both previous examples stand as visions projected from outside our Anthropocentric moment – they refuse the current narratives and knowledge systems of capitalism and attempt to use intelligent technologies or game engines to explore what a more-than-human assemblage could look, sound or ultimately feel like. In this convergence of artistic practice, software and politics, worlding through algorithms offers a pathway towards ways of being and knowing otherwise, through a re-purposing of the majority of computational and algorithmic tools surrounding us today into a minor language, able to render affective world instances. As Kelly observes, these artists ‘embrace technological development in their lives and work, but in a manner that is cognisant and critical of the frameworks that have developed within the tech industry’s supposed focus on human-centred advancement, which is inevitably driven by the demands of capital’ (4). Worlding, therefore, becomes a political act that aligns with the principles of minor literature in terms of its transformative potential. It invites us to challenge dominant modes of representation, question established boundaries, and imagine new possibilities. By constructing alternative worlds, these artists aim to challenge dominant narratives, ideologies of power, and structures of control and prompt audiences to envision different social, cultural, and political realities.
Conclusion
To conclude, we can begin to acknowledge that practices of worlding emerge as dynamic forces concerned with reshaping our understanding of technological, cultural and political structures. By harnessing the power of the majority tech operating in society, artists engage in a process of world-making that transcends traditional boundaries and opens up new possibilities for creative expression and political resistance. Drawing on the concept of a minor literature put forth by Deleuze and Guattari, we can situate worlding as a politically charged act of subversion and empowerment, by understanding it as minor practice in relation to the majority (or master) structures and narratives that perpetuate inequality, injustice, and oppression. Moreover, the harnessing of algorithmic technologies for speculatively rendering worlds can provide a fertile ground to explore modes of being otherwise, through the creation of immersive and interactive experiences of a different lifeworld, thus enabling artists to engage audiences in critical reflections on power dynamics, social hierarchies and more-than-human alliances.
Worlding disrupts the established order of things by refusing dominant narratives and offering counter-hegemonic visions of the world - it gives voice to other, more-than-human perspectives and challenges oppressive power structures - as Kathleen Stewart puts it, worlding allows for “an attunement to a singular world’s texture and shine” (340), an ability to not only envision, but relationally tune into a space of possibility, to hold open a portal into another cosmology. In this way, worlding becomes a form of resistance, enabling the creation of alternative realities and fostering the potential for social transformation through inviting audiences to critically engage with new possibilities for social or ecological change.
So, I close with a question, which sets up my research agenda: how can we situate and conceptualise these acts of worlding through an understanding of their relationship with software and affect, and how can the resulting networked epistemologies shape a politics of worlding in tune with what Zylinska defines as a minimal ethics for the Anthropocene?
Works cited
Anable, A. “Platform Studies.” Feminist Media Histories, vol. 4, no. 2, 2018, pp. 135-140.
Andersen, Christian Ulrik, and Geoff Cox. "Toward a Minor Tech". A Peer-Reviewed Newspaper, edited by Christian Andersen and Geoff Cox, vol. 12, no. 1, Apr. 2023, p. 1.
Bellacasa, María Puig de la. Matters of Care: Speculative Ethics in More than Human Worlds. University of Minnesota Press, 2017.
Braidotti, Rosi. Posthuman Knowledge. Polity Press, 2019.
Burrows, David, and Simon O’Sullivan. Fictioning: The Myth-Functions of Contemporary Art and Philosophy. Edinburgh University Press, 2019.
Cadena, Marisol de la, and Mario Blaser, editors. A World of Many Worlds. Duke University Press, 2018.
Cheng, Ian. BOB: Bag of Beliefs. Simulated lifeform, 2018-2019.
Cheng, Ian. BOB Shrine. Software Application, Version 1.7, Metis Suns, 2019.
Cheng, Ian, et al. Ian Cheng: Emissary’s Guide to Worlding. 1st ed., Koenig Books and Serpentine Galleries, 2018.
Cheng, Ian. ‘Worlding Raga: 2 – What Is a World?’ Ribbonfarm, 5 Mar. 2019, https://www.ribbonfarm.com/2019/03/05/worlding-raga-2-what-is-a-world/.
Chun, Wendy Hui Kyong. “On Software, or the Persistence of Visual Knowledge.” Grey Room, no. 18, Winter 2004, pp. 26-51.
Deleuze, Gilles, et al. "What Is a Minor Literature?". Mississippi Review, vol. 11, no. 3, 1983, pp. 13–33. JSTOR, https://www.jstor.org/stable/20133921.
Deleuze, Gilles, and Felix Guattari. Kafka: Toward a Minor Literature. First Edition, vol. 30, University of Minnesota Press, 1986.
Fisher, Mark. Capitalist Realism: Is There No Alternative?. Zero Books, 2012.
Foucault, Michel. The History of Sexuality. Volume I, Vintage Books, 1978.
Foxman, Maxwell. "United We Stand: Platforms, Tools and Innovation With the Unity Game Engine". Social Media + Society, vol. 5, no. 4, Oct. 2019, https://doi.org/10.1177/2056305119880177.
Freedman, Eric. "Engineering Queerness in the Game Development Pipeline". Game Studies, vol. 18, no. 3, Dec. 2018, https://gamestudies.org/1803/articles/ericfreedman.
Gibson, William. Pattern Recognition. G.P. Putnam’s Sons, 2003. https://archive.org/details/patternrecogniti00gibs/.
Gregg, Melissa, and Gregory J. Seigworth. The Affect Theory Reader. Duke University Press, 2010, https://doi.org/10.1515/9780822393047.
Haraway, Donna J. "SF: Science Fiction, Speculative Fabulation, String Figures, So Far". Ada: A Journal of Gender, New Media, and Technology, no. 3: Feminist Science Fiction, November 2013. DOI:10.7264/N3KH0K81
Haraway, Donna. Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press, 2016.
Keiken. BET(A) BODIES. Haptic wearable womb, 2021.
Kelly, Miriam. “Feedback Loops”. Feedback Loops, ACCA Melbourne, 2020, pp. 22-26.
LeGuin, Ursula K. “The Carrier Bag Theory of Fiction”. Dancing at the Edge of the World: Thoughts on Words, Women, Places, Ursula K. LeGuin, Grove Press, 1989. pp. 165 – 171.
Marino, Paul. The Art of Machinima: Creating Animated Films with 3D Game Technology. 1st edition, Paraglyph Press, 2009.
Massumi, Brian. “Deleuze, Guattari, and the Philosophy of Expression”. Canadian Review of Comparative Literature/ Revue Canadienne de Littérature Comparée. Sept. 1997, pp. 751–783. https://journals.library.ualberta.ca/crcl/index.php/crcl/article/view/3739.
McPherson, Tara. ‘U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX’. Race After the Internet, Routledge, 2011.
Munster, Anna. An Aesthesia of Networks: Conjunctive Experience in Art and Technology. MIT Press, 2013, https://doi.org/10.7551/mitpress/8982.001.0001.
Murray, Janet. "Did It Make You Cry? Creating Dramatic Agency in Immersive Environments". Virtual Storytelling. Using Virtual Reality Technologies for Storytelling, edited by Gérard Subsol, Springer, 2005, pp. 83–94. Springer Link, https://doi.org/10.1007/11590361_10.
Murray, Janet. “Glossary”. Humanistic Design for an Emerging Medium. 20 May 2023. https://inventingthemedium.com/glossary/.
Negarestani, Reza. “Sahej Rahal: A Life That Wanders in Time”. Feedback Loops. ACCA Melbourne, 2020, pp. 22-26.
Nichols, Bill. “The Work of Culture in the Age of Cybernetic Systems”. Screen, Volume 29, Issue 1, Winter 1988, Pages 22–47, https://doi.org/10.1093/screen/29.1.22.
Palmer, Helen, and Vicky Hunter. “Worlding”. New Materialism: How Matter Comes to Matter, 2018, https://newmaterialism.eu/.
Rahal, Sahej. Antraal. Simulated biome, 2019.
Stengers, Isabelle. In Catastrophic Times: Resisting the Coming Barbarism. Open Humanities Press, 2015. http://www.openhumanitiespress.org/books/titles/in-catastrophic-times/.
Stewart, Kathleen. "Afterword: Worlding Refrains". The Affect Theory Reader, Duke University Press, 2010, pp. 339–54. https://doi.org/10.1515/9780822393047-017.
Sutela, Jenna, Memo Akten, and Damien Henry. nimiia cétiï. Speculative audio-visual work, 2018.
Zhang, Gary Zhexi. “Jenna Sutela: Soult Meat and Pattern”. Magic, edited by Jamie Sutcliffe, Co-Published by Whitechapel Gallery and The MIT Press, 2021, pp. 153-156.
Zylinska, Joanna. Minimal Ethics for the Anthropocene. Open Humanities Press, 2014, p. 20.
Jung-Ah Kim
Weaving and Computation: Can Traditional Korean Craft Teach Us Something?
Abstract
This essay explores the intersection of computation and traditional craft, focusing specifically on weaving and the Korean traditional woolen carpet, modam. While both weaving and computers operate in binary terms, the essay acknowledges that weaving encompasses more than just binary logic, considering factors such as materiality, embodiment, and imagination. It seeks to explore the deeper connection between weaving and computation, beyond specific devices like punched cards, and how modam and its cultural context can shed light on this relationship. The essay also highlights the historical role of women in both weaving and computing, drawing parallels between weavers and the (gendered) body as components of early computational processes. By examining the historical, cultural, and technological nuances of modam production, this exploration aims to reveal insights into our present technology and our interaction with it.
Introduction
Recently, I encountered modam, a Korean traditional woolen carpet, for the first time in my life at the Textile Museum of Canada. I visited the museum’s opening of Gathering, a new exhibition that features 40 pieces from the museum’s permanent collection of over 15,000 objects from around the world. An open call for artists to make digital responses to the collection led me to find modam among its holdings and to make a small video about its history and how the practice slowly disappeared. Not only was I happy to see my work displayed side by side with the modam, but I was also taken by the beauty and the magnitude of the object itself. I had only seen it in digital scans and not in reality, so I was at first astonished by the sheer size of the tapestry. Because its length was greater than the height of the gallery wall, only two-thirds of the tapestry was visible as it hung on the wall. The visual elements of the tapestry were therefore much larger than I expected: the central crane was the size of a large rabbit or a medium-sized dog, giving me the illusion that it was flying right into my face. While I already have numerous questions and curiosities regarding various aspects of the carpet and its arrival in Canada, its size has sparked another significant question in my mind: "What was the purpose behind creating such a large carpet?"
The practice of Korean tapestry remains relatively unknown, even among many Koreans themselves. In fact, there is a common misconception among Koreans that carpets were solely imported from the West, without realizing that traditional carpets were once crafted within our own culture. This is presumably because the rapid industrialization of textile production has led to cultural amnesia and the marginalization of traditional crafts in Korea. As a result, many of the traditional ways of textile production have been forgotten and have fallen out of practice. I’m not an exception to this cultural amnesia, and had I not come across the carpet in the Textile Museum of Canada, I would have remained unaware of this fascinating tradition as well. However, records show that patterned wool carpets have existed in Korea since the Three Kingdoms Period (57 BCE – 668 CE) and were actively produced during the Joseon dynasty (1392-1910). (Paintings in Thread MODAM 30) The production of modam decreased in the 17th century, arguably because, by then, ondol, the traditional Korean underfloor heating system, had started to be widely supplied in households and people no longer needed carpets to insulate the floor. (Paintings in Thread MODAM 32) No carpets from the early Joseon period have survived, and more than 100 remain worldwide from the late Joseon period (16th-19th century). (Paintings in Thread MODAM 29) Recently, there has been an effort to introduce modam to the public and research it in a few Korean museums such as the Kyungwoon Museum and Daegu National Museum, which held exhibitions of modam in 2016 – 2017 and 2021 respectively.
Weaving, the process of interlacing threads to create fabric, has a rich history that traces back to ancient civilizations such as Egypt, Mesopotamia, and China. While weaving is often associated with textiles and fashion, its contributions to the history of technology are significant. From the development of ancient looms to the modern advancements in textile machinery, weaving has played a crucial role in shaping technological progress and societal development. A significant contribution of weaving to computation technology was the introduction of the punched card-controlled Jacquard loom in the early 19th century. Therefore, the discussion surrounding the involvement of weaving in the advancement of computation has predominantly centred on the importance of the Jacquard loom and the use of punched cards. However, I would like to explore a broader perspective, examining how weaving's influence on computation extends beyond the Jacquard loom. I am particularly intrigued by the involvement of traditional weaving and human labor in the development of computation, with a specific emphasis on exploring the potential contributions of Korean traditional weaving practices and devices that produced objects such as modam.
This essay begins with the familiar narrative surrounding the Jacquard loom and its significant impact on the history of computing through the use of punched cards. It then discusses how weaving has been a binary art form since its beginning and highlights recent discussions that emphasize the broader scope of weaving beyond these specific devices and binary logic. I introduce different aspects of modam, the Korean traditional woolen carpet, including its history, disappearance, and production methods. Lastly, I explore approaches to incorporating these aspects in order to consider what we could learn from traditional Korean weaving.
Punched card systems in the Analytical Engine & Tabulating Machine
Whether we start the history of computing with Charles Babbage’s Analytical Engine or Herman Hollerith’s tabulating machine, it is important to note that both machines used punched cards as a form of information storage and/or automatic control. Punched cards played an important role in computing history and were regularly used to program computers until the 1960s.
Hollerith’s tabulating machine used a method of storing information coded as holes punched into card stock. These cards, made of paper and featuring a grid-like structure, allowed data to be encoded by punching holes in specific locations. For instance, marital status could be represented by a hole at a particular position on the card. For a person recorded as married, the corresponding spot would be punched out, and the card would then be inserted into Hollerith's machine. Metal pins would descend over the card, passing through the punched holes and into small vials of mercury, thus completing the circuit. This completed circuit would then power an electric motor, causing a gear to increment the 'married' count by one. The concept of using a hole or a non-hole to represent and store data on paper cards, such as distinguishing between married and unmarried, anticipated information stored in digital form.
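As a rough illustration of this tallying logic, the short Python sketch below counts cards by their punched positions; the column numbers and field names are hypothetical, invented for the example rather than taken from Hollerith's actual card layout:

```python
from collections import Counter

# Hypothetical card layout: a punch in column 3 means 'married', column 4 'unmarried'.
FIELDS = {3: "married", 4: "unmarried"}

def tabulate(cards):
    """Tally cards by punched position: a hole closes the circuit and advances a counter."""
    counts = Counter()
    for punched_positions in cards:          # one set of punched columns per census card
        for position in punched_positions:
            if position in FIELDS:
                counts[FIELDS[position]] += 1   # the gear clicks forward by one
    return counts

# Three cards: two punched 'married', one 'unmarried'.
print(tabulate([{3}, {4}, {3}]))   # Counter({'married': 2, 'unmarried': 1})
```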
Babbage’s Analytical Engine used punched cards as a control function. The concept of automatic control, the ancestor of what we now call software, is as important as information storage in making up a computer. Mechanical control can be traced back to antiquity, to a device that had been used to control machinery for centuries: a cylinder on which were mounted pegs, which tripped levers as it rotated. (Ceruzzi 8) Babbage’s Analytical Engine was to contain a number of such cylinders to carry the more detailed sequences of operations directed by the punched cards; today we might call this the computer’s microprogramming, or read-only memory (ROM) (Ceruzzi 9). The Analytical Engine was to be programmed by providing three types of cards. Operation cards held instructions for the engine. Variable cards carried symbols and values of variables in equations as well as constants. And number cards supplied numbers for tables and logs. Like a modern-day computer, the Analytical Engine could make decisions based on its own calculated results; it could do branching, loops or subroutines (Poague 17). Although never fully constructed, the Analytical Engine was an ‘automatic computer’ that could guide itself through a series of operations automatically, which foreshadowed computer programs. The English mathematician Ada Lovelace wrote hypothetical programs for the Analytical Engine, and for this work she is considered the world’s first programmer. Lovelace, Babbage's main collaborator on the Analytical Engine, is also known for her famous remark, “It will weave algebraic equations the way a Jacquard loom weaves flowers.” (Poague 16) She applied her mathematical imagination in envisioning the potential of Babbage's Analytical Engine, exploring the idea of the machine being capable of performing various tasks beyond mere calculations. (O’Shea 121)
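To make the division of labour between the three card types more concrete, here is a deliberately toy sketch in Python; the mnemonics, card contents and the dictionary standing in for the engine's store are invented for the example and bear no relation to Babbage's actual card format:

```python
# Toy decks: operation cards name the operations, variable cards name the store
# locations used as operands and destination, number cards supply constants.
operation_cards = ["MUL", "ADD"]
variable_cards = [("a", "b", "t"), ("t", "c", "r")]
number_cards = {"a": 3, "b": 4, "c": 5}

store = dict(number_cards)   # a dictionary stands in for the engine's store
for op, (x, y, dest) in zip(operation_cards, variable_cards):
    store[dest] = store[x] * store[y] if op == "MUL" else store[x] + store[y]

print(store["r"])   # 17, i.e. 3 * 4 + 5
```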
Jacquard Loom, before the Analytical Engine & Tabulating machine
Babbage’s invention was based on the punched card system and the formal mechanics of Jacquard’s loom, an automated weaving loom that used a series of punched cards to create complex patterns more economically. The Jacquard loom was patented in 1804 by the Frenchman Joseph-Marie Jacquard, who implemented punched cards to control the weaving of cloth by selectively lifting threads according to a predetermined pattern (Ceruzzi 8).
The principle of weaving revolves around the movement and positioning of two essential groups of threads: the warp and the weft. The warp refers to the set of vertical threads that are held taut on a loom. These threads can be in one of two positions: up or down, also referred to as front or back. The position of the warp determines the path the weft will take during the weaving process. The weft, on the other hand, represents the horizontal threads that interlace with the warp to create the fabric. The weft thread travels either over or under the warp threads, depending on their respective positions. When the warp is up, the weft passes under it, and when the warp is down, the weft passes over it. The Jacquard loom incorporated a system of punched cards to effectively control the positioning of the warp threads. The process of making a fabric on a Jacquard loom involves a number of steps, including drawing the pattern by hand and transferring it onto checkered point paper (which becomes the “pixel resolution” of the final image), translating the design onto the punched cards, threading the loom (passing each warp thread through the heddles), and the actual weaving process (Fernaeus et al. 1596). The key feature of this process, and of the invention of the Jacquard loom, is again the use of punched cards, where fabric patterns are represented in the form of holes and the absence of holes in a long chain of punched cards stitched together (Fernaeus et al. 1597). When the stitched cards are fed into the loom in a continuous belt, each card comes in contact with the needle board and is pressed against it. The needles that pass through the holes remain in the same position, whereas all other needles are pushed back. In turn, the particular heddles that correspond to the needles that stayed in place are raised, while the other heddles are not. In short, the punched holes in each card control which warp threads are raised per shed, thus creating the weaving pattern. The mechanics of the punched cards could be regarded as a binary representation, making it possible to ‘digitize’ material objects, creating a form of ‘code’ only possible to interpret by running it through a mechanical device. It is in this sense that the Jacquard loom is often discussed as being a predecessor of the modern-day computer (Fernaeus et al. 1597).
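The card-to-shed logic described above can be sketched in miniature. The Python fragment below is an illustration rather than a model of any real loom (an actual Jacquard head controls far more threads, and the card geometry is invented here): a 1 stands for a hole that lets a needle pass, so the corresponding warp thread is raised for that pick.

```python
# One card per weft pick, six warp threads wide; 1 = hole = warp raised on that pick.
cards = [
    [1, 0, 1, 0, 1, 0],   # odd warp threads raised
    [0, 1, 0, 1, 0, 1],   # even warp threads raised
]

def weave(cards, picks):
    """Read the card belt in a loop and return the interlacement grid (X = warp over weft)."""
    cloth = []
    for pick in range(picks):
        card = cards[pick % len(cards)]   # the cards are stitched into an endless belt
        cloth.append(["X" if hole else "." for hole in card])
    return cloth

for row in weave(cards, 4):
    print("".join(row))
# X.X.X.
# .X.X.X
# X.X.X.
# .X.X.X   -> a plain (tabby) weave
```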
From the standpoint of loom technology, the Jacquard loom completed and perfected the mechanism that automated the loom using punched cards. However, binary control using holes and non-holes already existed in previous efforts such as Basil Bouchon’s invention in 1725, which used a band of perforated paper tape, Jean Baptiste Falcon’s invention in 1728, which introduced a loop of punched cards, and Jacques de Vaucanson’s invention in 1745, which was the first automated loom. Jacquard did not invent the binary structure of weaving, let alone the punched card system. What he did was construct the first feasible and widely used mechanism that replaced the human being (the so-called drawboy, who lifted the warp threads on behalf of the weaver and thus controlled the weave pattern) with punched cards that fed in the pattern information.
Digital nature of weaving
However, the connection between weaving and computers cannot be reduced to the role of punched cards. A computer scientist and a weaver respectively, Martin Davis and Virginia Davis aim to correct the misconception of the Jacquard loom as the ancestor of computers. They argue that the Jacquard loom is no more like a computer than a player piano is, which also operates on punched holes as an input device. Punched cards are only the peripheral device that brings data into or out of the machine, and should not be taken for the computer itself (Davis and Davis 79).
Weaving and digital computers process data in similar ways regardless of the punched cards, because to weave means to decide whether a warp thread is to be picked up or not. Therefore, weaving has been a binary art from its very beginning, as also stated by the computer pioneer Heinz Zemanek (Harlizius-Klück 179). When referring to the prehistory of processing information, Zemanek states that each crossing of two threads means a digital point (Zemanek 16; Harlizius-Klück 183). When we speak of representing data in weaving as 1s and 0s, or in binary terms, we’re speaking of the interlacements that occur when a warp thread is raised, thus covering the weft thread, or not raised, thus covered by the weft thread. The holes on the punched card merely represent which warp threads are to be raised.
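A brief sketch can make this point concrete: below, a small weave draft (a 2/2 twill repeat, chosen purely as an example) is held as plain binary numbers and expanded into its individual crossings, with no punched card involved anywhere.

```python
# Four picks of a 2/2 twill repeat, each pick encoded as four bits (1 = warp over weft).
twill_draft = [0b1100, 0b0110, 0b0011, 0b1001]

def crossings(draft, width=4):
    """Expand each pick into its individual crossings: one digital point per intersection."""
    return [[(pick >> (width - 1 - i)) & 1 for i in range(width)] for pick in draft]

for row in crossings(twill_draft):
    print(row)
# [1, 1, 0, 0]
# [0, 1, 1, 0]
# [0, 0, 1, 1]
# [1, 0, 0, 1]
```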
Ellen Harlizius-Klück intends to widen the view that seems to be fixed upon the Jacquard mechanism. Her article sheds light on the algebraic patterns and codes of weaving that were already present before the Jacquard loom. The punched cards made the pattern algebra of weaving perceivable to someone interested in the construction of calculation engines based on binary logic, like Charles Babbage. (Harlizius-Klück 179) She argues that a sort of algebra is already involved in operating shafts (movable frames or sets of heddles that control the position of warp threads) or heddles (cords or wires attached to a loom's shafts that hold and control the individual warp threads) in ordinary looms. This algebra was executed as a tacit inference until the first weaving notations were developed, and these weaving notations resemble the respective loom parts and make the tacit visual algebra of patterns recognizable to non-weavers, in particular inventors and engineers. (Harlizius-Klück 179) For millennia, pattern weaving was done without notation. Skilled weavers did not make plans in advance, developing each and every step of the process and documenting these single steps in writing. The loom parts, like heddles or shafts, store most of the necessary information, and skilled weavers can read bindings and patterns directly from fabric. In this sense, fabric samples were the best and most commonly used memory or storage of patterns (Harlizius-Klück 183). However, the development of printed and published pattern notation made recognizable the tacit algebraic thinking that was already involved in operating shafts and heddles in ordinary looms (Harlizius-Klück 179). Weaving notations revealed algebraic ways to organize threads in groups and subgroups, and how to code the pattern using the loom setup, facilitating the understanding of the interaction between pattern drafting and loom parts for non-weavers. This enabled engineers and inventors to play around with the mechanisms and make attempts at the automated loom (Harlizius-Klück 192).
Birgit Schneider, in her article “Programmed Images: Systems of Notation in Seventeenth- and Eighteenth-century Weaving”, overviews weaving as technical image processing. She questions whether the first printed weaving notation could be used as data fed into a control mechanism on the loom. She identifies a precursor of technical image processing in the notations written and published in 1677 by Marx Ziegler, a weaver from Ulm, Southern Germany. These notations encoded images through the arrangement of threads and the tie-ups, which represented the geometric properties of the pattern. (Schneider 143) She is interested in weaving notations from the viewpoint of the prehistory of technical image processing and image coding. (Harlizius-Klück 191) The close connection of code or design and loom construction was also stressed by Hilts: “Loom-controlled pattern weaving is a distinct branch of design in which art and technology are closely interrelated.” (Harlizius-Klück 191)
The true significance lies in the ancient practice of weaving and its profound connection to mathematics, and in its inherently digital nature, rather than solely in the Jacquard loom. It is essential to recognize and appreciate the inventive and skillful work performed by weavers on a daily basis, which should not be overshadowed by new tools and inventions. Heinz Zemanek supports this notion, highlighting that various folkloristic weaving devices found across Europe, Africa, and Asia are, in fact, implementations of, or tools for, programmed processes (Zemanek 16; Harlizius-Klück 183). This perspective helps open the door to exploring traditional weaving techniques in non-western regions. It underscores the notion that people, with their expertise and methodical actions, acted as pattern-processors long before the introduction of punched cards. The roots of computation lie not in some specific device but rather in the disciplined labor of human beings. In this context, I am particularly intrigued by how the production of modam could also serve as a technology that enables us to gain a deeper understanding of this connection.
Beyond the digital
However, in their article "Weaving Beyond the Binary," John Paul Morabito explores weaving in a way that goes beyond its disciplinary boundaries and the strict technical aspects it is often associated with. While it is acknowledged that weaving on a loom involves binary logic, the digital aspect is just one of many paradigms encompassed by the practice (Morabito 4). Narrow definitions that reduce weaving to binary overlook the multitude of factors involved. The author seeks to unlock the potential found in the materiality, embodiment, and imagination inherent in weaving. Factors such as scale, length, and width introduce considerations that go beyond binary choices. Variables like color, fiber composition, and texture further expand the possibilities, not constrained by a binary framework. Even the interlacing of threads can be expanded beyond the binary when we shift our focus to the movements within the cloth itself, going beyond the movements dictated solely by the loom (Morabito 5). The author explores multi-layered weaves, such as double, triple, or quadruple weaves, where the cloth offers far more options than a binary system allows. Creating multilayered cloths requires a weaver to consider both the binary movement of the loom and the intricate movements of threads within and between the different layers (Morabito 5). This highlights that weaving is polynary, not binary, referring to phenomena composed of more than two parts. While binary thinking presents an either-or battleground, polynary thinking presents a playground (Morabito 4). Polynary thinking becomes evident when we look beyond the Jacquard loom and emergent technologies, instead focusing on ancient looms where one action sets the conditions for a new set of activities. Ancient and embodied weaving technologies offer a more expansive understanding of weaving that surpasses the categorization of weaving as a rigid space (Morabito 5). The exploration of warp-weighted weaving and the backstrap loom, contributed by Emma Cocker and Jenni Sorkin, is particularly intriguing in this context. In warp-weighted weaving, the process begins with a tablet-woven band that is then rotated to initiate a new weaving. The elongated wefts extend outward to eventually become the warp, and the tablet loom serves as the scaffold for the next weaving, allowing the textile to grow in any direction, defying the linear progression of modern weaving techniques (Cocker 130; Morabito 5). This article is significant as it offers a comprehensive perspective on weaving that transcends the limitations of binary logic. By challenging binary thinking, it has the potential to prompt a re-evaluation of computation itself.
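As a loose, purely conceptual sketch of this polynary point (the layer, lift and colour labels below are invented for the example and are not Morabito's terms), the state space of a single crossing in a double weave can be enumerated to show that it already exceeds a binary either-or:

```python
from itertools import product

layers = ["top", "bottom"]            # double weave: which cloth layer the crossing sits in
lifts = ["warp over", "warp under"]   # the familiar binary interlacement
colours = ["indigo", "madder"]        # one material variable among many possible

# The number of states per crossing is a product of choices, not a single bit.
states = list(product(layers, lifts, colours))
print(len(states))   # 8 possible states per crossing rather than 2
```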
These sources offer compelling insights into why delving into modam and its traditional weaving method and practice may deepen our understanding of its potential connection to computation, irrespective of whether it is connected to binary logic or not. There is a potential to bring forth a traditional perspective and explore alternative modes of computation that go beyond the conventional device-oriented binary paradigm. While some historical facts and production elements of modam have been explored to some extent, there is still much more to uncover and reveal about this subject. Further research and exploration can unlock its potential as a unique and culturally significant approach to computation and enrich our understanding of technological innovation from a more inclusive and diverse perspective.
Modam, traditional Korean woolen carpet
From fragments of woolen fabric found in ancient relics of the Gojoseon period (? – 108 BCE), we can tell that Korea has a long history of woolen textile practice. The earliest known example of woolen fabric is a face veil that was woven with a mixture of sheep wool and dog hair, dating back to the Gojoseon period. Fragments of woolen fabrics from the 1st to 2nd century have also been discovered in Pyeongyang. It is therefore confirmed that ancient Koreans had the technology to spin animal fur and weave woolen fabric. Records show that woolen textiles to spread on the floor, such as ‘mosuk’ or ‘moyok’, were produced from the Three Kingdoms Period (57 BCE – 668 CE) to the Joseon period (1392-1910). (Moon 18) ‘Modam’ appears under different names in various records; however, it is generally made from animal hair and was used not only to spread on the floor but also to hang as a canopy. It appears that it was decorated with dyed threads or painted with patterns. Modam was considered a valuable and luxurious item, and it was traded as a commodity with China and Japan from the Three Kingdoms period to the Joseon Dynasty. Furthermore, archival photographs evidence that modam was also used by the general public in later periods. (Moon 18) It has been confirmed that more than 100 modam artifacts exist domestically and abroad. Some of them are housed in the Seoul Craft Museum, Sookmyung Women's University Museum, Onyang Folk Museum, etc. in Korea. Others that were transmitted to Japan as ‘Joseonchul’ are held by the Kyoto Gion Foundation and in private collections. (Moon 19)
Classification of Modam by its production method
The modam artifacts date back to the 16th to 19th centuries and can be classified into three types - tapestry, plain weave, and felt - according to their weaving style. However, as time progressed, tapestry techniques decreased in popularity, giving way to a greater prevalence of painted patterns. The combinations of weave structure and design technique include tapestry alone, tapestry + painting + printing, plain weave + painting, and felt + painting. As the weave structure became less complex, the patterns were more likely to be painted onto the fabric. (Moon 19) 66% of these modams are composed of tapestry with patterns created using painting or printing techniques on different textile surfaces. The composition of the design typically consists of a central pattern and a border pattern. The central pattern is usually composed of animals such as phoenixes, lions, tigers, and magpies, as well as flowers such as orchids and plum blossoms, butterflies, and mountain hydrangea. The border decoration can be classified into two types: geometric patterns such as diamond stripes, color stripes, palindromes, and swastikas that decorate the top and bottom, and animal and plant patterns such as butterflies, flowers, and birds that decorate the edges. (Moon 20) The tapestry weave structure that accounts for the highest percentage of the Joseon period’s modam is based on plain weave. However, instead of the weft thread passing through the entire width of the fabric, it is partially woven according to the pattern. Fabrics woven in this way have the characteristic of small gaps created in the warp direction because the weft is not continuous. This weave structure is called tapestry in North America, and in countries such as Turkey and Iran it is called kilim. (Moon 20) Modam artifacts exhibit a more simplified weave structure as time went on, which represents stylistic changes over time. (Moon 21) The tapestry technique was gradually phased out in favor of simpler plain weaving, and pattern creation also shifted from being woven to being drawn on the surface. This indicates a gradual progression towards a more convenient environment for production. (Moon 23)
Production method
To weave fabric, three basic processes are essential: raising the warp, passing the weft through, and beating down the weft. The principles of weaving machines are accomplished by these three basic processes, which can create a plain weave, the most basic weave. Primitive weaving involved manually raising some of the warp with the hands or using tree branches or bone needles. It is assumed that a weaving machine that embodies these basic weaving principles, such as the warp-weighted loom, would have been used to produce modam. Weights from warp-weighted looms, made of soil and dating back to 2000 BCE, have been found on the Korean peninsula. (Moon 71) Warp-weighted looms are ancient forms of looms used to weave woolen fabrics, and were especially used in weaving tapestries that are based on the plain weave technique. The warp-weighted loom uses weights to hold the threads tight and parallel, and we have evidence of this type of loom from ancient pottery. (Broudy 23) The loom uses a rod to separate the threads and weights to keep them taut. The weaver creates a shed, or opening, in the threads by using heddles and rests the heddle rod on supports. The weaver then inserts the weft, or horizontal, threads and uses a sword beater to keep them in place. As the weaving progresses, the woven portion can be rolled up on the top beam, allowing for longer fabrics to be made. Heavier weights were used for tighter weaving, while lighter weights resulted in looser weaving. Weavers could also adjust the tension by attaching more threads to the heavier weights and fewer to the lighter ones. The history of the warp-weighted loom is long, and it has been found in many ancient civilizations, including in Anatolia, Palestine, Crete, and Europe. (Broudy 25) The plain weave structure of modam is also found in Korean traditional baskets and mats. The loom utilized to make those baskets and mats has a basic design that primarily functions to hold and tension the warp, with minimal additional components. (Moon 72) The weaving machine for mats currently produced in the Boseong area of Korea is called "jariteul", which is a vertical form of weaving machine. The jariteul has an operating principle similar to that of the traditional beopteul, for example a device on the top of the loom for adjusting the tension of the warp. (Moon 72)
From Modam to carpets from the West
The early Joseon period author Seo Geojeong (1420-1488) described the interiors of houses on winter days of Joseon in his book: “Colorful modams are spread on the floor and embroidered curtains are draped around. Charcoal in the furnace blooms red like spring flowers.” (Paintings in Thread MODAM 29) This scene is quite different from the common perception of the living style of the traditional Korean house, the hanok, with its ondol heating system. Ondol is a traditional Korean underfloor heating system that was in widespread use by the 17th century. If the interior of a house is heated using ondol, there is little need to spread a thick carpet on the floor. Curtains are also unnecessary, as the air inside the house is kept relatively warm. That is why the interior of a hanok house with an ondol system consists of papered windows and a floor coated with oil paper. (Paintings in Thread MODAM 29) Researchers believe that ondol brought drastic changes to the living culture of Joseon, especially in housing and cooking. It is believed to be one of many factors behind the decline of modam production. (Paintings in Thread MODAM 33) No carpets from the early Joseon period have survived, but their images can be found in portraits of figures in official attire from the seventeenth century. Carpets were no longer depicted in portraits after the 17th century and were replaced by figured rush mats from Ganghwa Island, known as hwamunseok. The next known appearance of a carpet in a portrait comes in the depiction of Yi Haeung (1820-1898) from 1880. (Paintings in Thread MODAM 29) A few carpets from the late Joseon period remain extant. Recent discoveries of pieces of modam from Changdeokgung Palace’s Seongjeonggak Hall provide clues about the uses and types of modam used in the 20th-century royal court. (Paintings in Thread MODAM 37) Additionally, there is evidence of the use of modam among the public. A photo taken by Father Norbert in 1911 shows modam used in weddings of ordinary people. The book Viewing the Joseon Dynasty through the Eyes, published in 1986, also depicts women drawing pictures while sitting on modam. (Paintings in Thread MODAM 37)
Carpets imported from Europe appear in portraits from the early twentieth century. In the June 19th, 1879 issue of Dongnip Sinmun (Independence Newspaper), an advertisement appeared for imported carpets sold by F. Kalitzky, a foreigner living in Korea at the time. This marked the introduction of Western-style carpets to Korea. (Paintings in Thread MODAM 38) In the late 19th to early 20th century, modam was referred to as yungjeon, dantong, mopo, and yangtanja in newspaper articles and advertisements. These articles and advertisements concerned domestically produced carpets, and an 1899 issue of Dongnip Sinmun encouraged their production as a national industry. (Paintings in Thread MODAM 37) In the 1920s and 1930s, there was a noticeable increase in advertisements for workshops that taught women how to make, maintain, and sell dantong. This suggests that dantong and yungjeon were domestically produced and were the modam that ordinary people used. (Paintings in Thread MODAM 37)
What could Modam teach us?
Despite the existing knowledge about modam, much remains unknown. Questions arise regarding the people and labour involved in the production process, and the culture surrounding modam, which could unveil a deeper understanding of the social context in which it existed. The detailed production procedure, including the tools and materials, can shed light on the craftsmanship and techniques employed by the makers of modam. Examining the correlation between the decline of modam and the widespread use of ondol, the underfloor heating system, raises relevant questions about technology and may uncover intriguing connections between objects and space, namely the architectural infrastructure. Exploring the historical export of modam to China and Japan during trade exchanges can offer insight into the cross-cultural significance of this craft. Particularly noteworthy are the carpets transported to Japan by the Joseon Tongsinsa, the Korean mission to Japan, in the 17th century after the two countries restored diplomatic relations following the Japanese invasion of Korea in 1592. These carpets decorated the yamaboko carriages used in the celebrated Gion Matsuri festival in Kyoto, which could reveal another intriguing dimension of the production and distribution processes of modam.
What struck me most about the specific modam housed at the Textile Museum of Canada, when I first encountered the object, was its sheer size. It is 1.22 meters wide and 3.06 meters long. This scale raises intriguing questions about its purpose and the individuals involved in its creation. Such a substantial carpet would certainly have required collaborative labour, engaging the skills and expertise of numerous individuals. Who were the skilled artisans involved? What was the intended use or significance of this expansive carpet? The production process of modam could tell us something about collaborative craftsmanship, which in turn may tell us something about the roots of computation that lie in disciplined and cooperative human labour, rather than solely in devices such as punched cards. Lizzie O'Shea’s article, "Collaborative Work is Liberating and Effective," gives valuable insights into this notion by delving into the intersections of labor culture in the realms of textiles and computing. She explores the historical context of collaborative work through examples such as Ada Lovelace and Charles Babbage's collaboration on the design of the Analytical Engine and the resistance of the Luddites against the separation of craftsmanship and care in favor of labor and wages. O'Shea then delves into the evolution of collective and open software development, tracing its roots in early hacker culture and its transformation with the rise of proprietary software driven by profit motives. She writes, “Some of our most radical new technological developments were a result of teamwork, drawing on multiple people’s varied skill sets.” (O’Shea 131) “Computing began as a small pocket of sophisticated craft labor practiced in a relatively unalienated manner, while the world of capitalist enterprise carried on all around.” (O’Shea 131) Drawing on case studies like the hacker culture in the MIT lab and the Linux community, the article examines the relationship between labor, craftsmanship, collaboration, and capitalist modes of production.
The main contributors to the production of modam are not entirely known, but given the advertising of workshops during the 1920s and 1930s aimed at teaching women how to create, manage, and market modam, it can be inferred that women played a role in its manufacture. By exploring the production of modam and its associated cultural context, could we uncover insights about gendered labor hidden in technological advancements and/or our relationship with technology and machines? The role of women in the history of weaving and computing has been thoroughly explored in Sadie Plant's work, "The Future Looms: Weaving Women and Cybernetics." In this paper, she delves into the traditional perception of weaving as women's work and highlights the significant contributions women made to the early development of computing technologies. This coincides with the early days of information processing, when women were predominantly employed to do calculations. Back in the 1930s and 1940s, people who performed calculations were called "computers," and the majority of this work was carried out by women. (Hayles 1) Anne Balsamo, Hayles writes, references this terminology in her book Technologies of the Gendered Body when she begins one of the chapters with the line “My mother was a computer,” which reflects her mother’s actual work as a computer. Balsamo uses this family history to reflect on the gender implications of information technologies. (Hayles 1) An illustration of this idea can be seen in the Making Core Memory project, a collaborative project from the University of Washington's Tactile and Tactical Design Lab. The project aimed to recognize the hidden labor involved in assembling core memory, a primitive form of computer storage initially woven by hand by individuals known as "Little Old Ladies." (Rosner et al. 1) The project involved the creation of an electronic quilt and a series of interactive workshops that materialized the efforts of the core memory weavers. Core memory played a pivotal role in computer systems during the early Cold War era, including the Apollo mission computers, where information was stored using wires threaded around magnetized rings. NASA engineers referred to this hardware as "LOL memory" for the “Little Old Ladies” who carefully wove wires around small electromagnetic ferrite cores by hand. The project highlights the gendered craftsmanship that underlies digital production and acknowledges the often-overlooked contributions made to engineering advancements. (Rosner et al. 1)
The historical shift from human to machine labor raises an array of issues about the relationship between humans and machines, such as the figure of the (gendered) body as a component of the machine. This idea is also present in the relationship between weavers and weaving machines, as weavers interact closely with their looms, treating them as integral components of the weaving process. This is especially exemplified in the backstrap loom, one of the oldest weaving technologies, where one end of the loom is harnessed around the waist of the weaver with a backstrap. Traditional Korean clothing materials for summer, such as ramie and hemp fabrics, were woven on backstrap looms, and the technique of weaving ramie fabric produced in Hansan, Seocheon-gun, Chungcheongnam-do is registered as UNESCO Intangible Cultural Heritage and is passed down to this day. Directing our attention to ancient looms and embodied weaving techniques such as the backstrap loom has the potential to provide us with a broader understanding of our connection with technology and computation.
Conclusion
In this essay, I explore the correlation between traditional crafts such as weaving and computation. More specifically, I draw attention to modam, the traditional Korean woolen carpet. This ancient form of weaving and its technologies hold untapped potential for revealing a deeper understanding of their connection with computation, beyond the familiar narrative surrounding the Jacquard loom. Traditional craftwork has taught me more valuable lessons about technology than I expected. My experience of working on a weaving loom taught me a great deal about physical, tangible forms of interaction with technology. Spending hours manually setting up the loom and passing each thread through the heddles made me feel connected to the machine in an unexpected way. The whole body interacting with the loom, throwing the shuttle across the warp, and controlling the treadles to see a pattern emerge on the fabric gave me the sense that I was working with the machine, not dependent on it. Weavers are comparable to the early human "computers" of information processing, as both were integral components of a mechanized workforce. In my continuing research, I hope to delve deeper into the historical, cultural, and technological intricacies of modam production. I anticipate that this will uncover surprising insights into our present-day technology and our relationship with it.
Works cited
Broudy, Eric. The Book of Looms: A History of the Handloom from Ancient Times to the Present. Brandeis University Press, 2021.
Ceruzzi, Paul E. Computing: A Concise History. The MIT Press, 2012.
Cocker, Emma. “Weaving Codes/Coding Weaves: Penelopean Mêtis and the Weaver Coder’s Kairos.” Textile: Cloth and Culture, Volume 15, Issue 2, 2017, pp. 124-141.
Davis, Martin and Virginia Davis. “Mistaking Ancestry: The Jacquard and the Computer.” Textile: Cloth and Culture, Volume 3, Issue 1, 2005, pp. 76-87.
Fernaeus, Ylva; Martin Jonsson, and Jakob Tholander. “Revisiting the Jacquard loom: Threads of history and current patterns in HCI.” Conference on Human Factors in Computing Systems – Proceedings, 2012, pp. 1593-1602.
Griffiths, Dave. “Coding With Threads: Frame Loom.” Weaving Codes – Coding Weaves, Dec. 2014, https://kairotic.org/2014/12/22/coding-with-threads-frame-loom/#more-120
Harlizius-Klück, Ellen. “Weaving as Binary Art and the Algebra of Patterns.” Textile: Cloth and Culture, Volume 15, Issue 2, 2017, pp. 176-197.
Hayles, N. Katherine. My Mother Was a Computer: Digital Subjects and Literary Texts. University of Chicago Press, 2005.
Moon, Hee Won. A study of manufacturing technique for reproduction of ‘Mo-dam' owned by Seoul Museum of Craft Art. Graduate School of Convergence Cultural Heritage, Korea National University of Cultural Heritage, MA dissertation, 2019.
Morabito, John Paul. “Weaving Beyond the Binary.” Textile: Cloth and Culture, Volume 0, Issue 0, 2022, pp. 1–15
O’Shea, Lizzie. “Collaborative Work Is Liberating and Effective: Poetical Philosophy, from Lovelace to Linux.” Future Histories: What Ada Lovelace, Tom Paine, and the Paris Commune Can Teach Us about Digital Technology. Verso, 2019, pp. 119-144.
Paintings in Thread MODAM, The Carpets of Joseon Dynasty. Daegu National Museum, Dec. 17, 2021. Pamphlet.
Plant, Sadie. “The Future Looms: Weaving Women and Cybernetics.” Body & Society, Volume 1, Issue 3-4, 1995, pp. 45-64.
Poague, Susan Aileen. Computer Design in the Handweaving Process. 1987. Iowa State University, MA dissertation.
Rosner, Daniela K., et al. “Making Core Memory.” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, https://doi.org/10.1145/3173574.3174105.
Schneider, Birgit. “Programmed Images: Systems of Notation in Seventeenth- and Eighteenth-Century Weaving.” The Technical Image: A History of Styles in Scientific Imagery, edited by Horst Bredekamp, Vera Dünkel, Birgit Schneider, The University of Chicago Press, 2015, pp. 142-156.
Zemanek, Heinz. “Computer Prehistory and History in Central Europe.” American Federation of Information Processing Societies (AFIPS) Conference Proceedings 45, 1976, pp. 15-20.
Freja Kir
Glitchy, Caring, Tactical: A Relational Study Between Artistic Tactics and Minor Tech
Abstract
This paper directs attention to the parameters of creative resistance to large-scale commercial digital platforms. It does so by enhancing the understanding of minor tech through the analysis and case study of the artwork VPN. While minor tech might sound unfamiliar to many, examples of its existence are at the same time incredibly familiar through instances of digital commoning, the sharing of skills, and organisational systems. In the case of VPN, the work existed as a growing emancipatory multimedia archive, executed as a transparent server architecture revealing its technical workings to its users. This format exemplified tactics of intentional glitches through an artful inclusion of persons, space, and objects. By identifying the elements of tactics and care within VPN, the paper draws parallels with overlapping tendencies within the movement of minor tech. Drawing on Olga Goriunova's research on 'shadow librarians' and including earlier digital examples of knowledge sharing furthermore assists in sketching a lineage of web development leading towards the nature of minor tech and VPN. By analysing the significance of these initiatives, the paper raises the questions: What are the drives across creative resistance practices? And (how) do such creative contributions help to critically nuance various existing understandings of large-scale digital platforms?
Introduction
As commercial digital platforms enable and constrain social action in various domains, their elusive structures increasingly govern spatially dispersed entities through digital devices, measurements, and registries. In order to critically engage with new developments at this scale, it is crucial to understand their drivers and, in this case, how resistant practices help to critically expose and substitute digital power structures.
In this paper, I discuss the parameters of creative resistance to large-scale commercial digital platforms. With a contemporary focus, I draw on historical examples of creative counter-tactics of digital knowledge sharing to address this tendency. By analyzing the participatory fileserver and artwork, VPN, I bring particular attention to the significance of its transparent server architectures and parallels with the care, conditions, and drives of the minor tech movement that critically rejects digital platform operations. What are the drives across creative resistance practices? And (how) do such creative contributions help to critically nuance various existing understandings of large-scale digital platforms?
The paper is divided into six sections. Diving straight into the personal encounter with VPN, I first introduce the artistic case study and theoretical framework that will unfold throughout the paper; secondly, I include a historical, technological context, which supports the third section's introduction of the concept of minor tech; in the fourth section I look at the doings of VPN through the lens of artistic tactical media; and finally, the last sections consider the techno-cultural gestures between minor tech, knowledge sharing, and artistic examples, including drives and the aspect of care and maintenance.
An emancipatory file server
Imagine a locally disrupted online platform. A scattered illustration turns up on your smartphone screen: the circular pattern turns into a globe, then an installation setting, and finally into the shape of a famous cartoon character, all accompanied by a twisted interference of sounds and texts. Scattered letters start interfering with the scroll: “Nodes are elastic homes and links are dynamic roads, and each one is guiding you through a different story.” (VPN screen excerpt).
The scene unfolding describes the features of the emancipatory file server VPN (Virtual PUB Network) (fig. 1). If you reach this screen intervention, you have arrived at the landing page and are physically near one of the nodes connected to the VPN installation. From this spot, all visitors can read and contribute to the growing archive of written, visual, and recorded content that makes up the artwork and emancipatory fileserver.
My first personal encounter with VPN was when it served to map and archive the 2019 graduation show of the art and design postgraduate institution Sandberg Instituut (Amsterdam). However, whereas archives typically help to create order, the visual interfaces of VPN were location-dependent and coded to intentionally disrupt the user’s scroll. Functioning as an open-source instrument, VPN was presented as a framework for circulating knowledge through shared visual, lingual, and vocal material.
During the graduation show, this format allowed the nodes to serve as a live feed, growing collective archive, and local navigational information service across four fixed nodes and one nomadic unit connected to the network. Technically, one node functioned as a server; the second was for the live feed; a third would capture sound; the fourth hosted visuals; and finally, the fifth one was for the text. Across these structures, a proxy had been installed to replace the content provided at each location. With each node installed in various locations, the interface would accordingly adapt and present the archive differently depending on the specific location.
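To make the node-dependent behaviour described above more concrete, the following is a hypothetical, minimal sketch rather than the artists' implementation: a small Python HTTP server that answers every request with the content bound to the node it is configured as, so that the same archive surfaces differently at each location. The node names and content strings are placeholder assumptions.

<syntaxhighlight lang="python">
# Hypothetical sketch only; not VPN's actual code. Each physical node would
# run one instance configured with its own role, so visitors near that node
# see the slice of the archive bound to it.
from http.server import BaseHTTPRequestHandler, HTTPServer

NODE_CONTENT = {  # placeholder content per node role
    "server": b"<p>archive index</p>",
    "livefeed": b"<p>live feed placeholder</p>",
    "sound": b"<p>audio fragments</p>",
    "visuals": b"<p>image stream</p>",
    "text": b"<p>scattered letters interfering with the scroll</p>",
}

NODE = "text"  # which node this instance acts as


class NodeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request is answered with the content bound to this node,
        # so the archive appears differently depending on location.
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(NODE_CONTENT[NODE])


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), NodeHandler).serve_forever()
</syntaxhighlight>

Each node in the installation would run its own instance with a different NODE value, loosely imitating the proxy-per-node behaviour described above.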
While VPN is site-specific and activated through physical interaction, it is simultaneously scalable and can be installed to work in any context. With a technical set-up that relies on the servers of its host institution (in this instance, the Sandberg Instituut), the installation of VPN challenged and exposed the IT department and the bureaucratic aspects of the internal academic digital network structures.
As VPN both enacts a platform environment and exposes its infrastructural server architecture across different locations, it becomes relevant to consider the intertwined nature of infrastructure and platform studies in order to situate VPN and minor tech within a broader context of media studies. A critical inspection of both infrastructures and digital platforms often requires considering the means for observing. Directing attention to the different ways of making infrastructures more approachable is nothing new. In the field of urban studies, this approach requires seeing not only buildings, shapes, and outlines but also the wires and operating systems that shape a city, including the conditions that produce such standardised systems (Easterling; Parks and Starosielski). Relatedly, although platform systems are most often systematically and algorithmically guarded, their shapes are difficult to grasp and often can only be imagined (Gillespie). If the platform shape is approached as an imaging technology, artworks such as VPN make possible a three-dimensional entrance for envisioning the technical actions behind the two-dimensional screen (Siegert). In this context, infrastructure studies contribute a valuable sociotechnical consideration of expansive and often governed systems and services related to digital platforms at various scales (Plantin).
The parallels between the artwork VPN and the movement of minor tech are strongly driven by the activation and exposure of critical infrastructures and the systematic distance from corporate digital platform services. Approaching VPN and minor tech as examples of an expanding relation between infrastructure and platformization studies may also reflect an expanding horizon within digitally oriented media studies that allows for alternative approaches such as artistic studies.
With reference to Keller Easterling’s call for active, creative approaches to infrastructure, the example brought forward by VPN provides a transparent server architecture which unveils an active format for exploiting software workings at a local scale. The drive to critically expose infrastructural systems is attracting increasing attention from various sites of academic, spatial, and artistic practice, and several recent artistic initiatives pursue like-minded, disobedient, action-driven research toward alternative narratives.
Such initiatives are, for instance, exemplified by the critical collaborative artistic and academic research inquiry the Underground Division, initiated by Helen Pritchard, Jara Rocha, and Femke Snelting as a follow-up to Possible Bodies’ focus on the complex relations of bodies in the context of technotools. Their doings are focused on (but not limited to) the intersection of physical ground and digital sphere, as is the case in their Extended Trans*Feminist Rendering Program, which initiates collective skills for sensing data and investigates contemporary scanning practices through magnetic resonance, ultrasound, and computed tomography. Another related collective example is the Cell for Digital Discomfort, formed as part of the 2021/2022 BAK fellowship for Situated Practices, which specifically looked into ways of refusing dominant digital platforms, referred to as “totalitarian innovation”. This focus stressed the solution-oriented approach of mono-cultural and corporate digital platforms with little (if any) room for investigating otherwise. What these projects share is a critical, collective knowledge that points to the complex realities of discriminatory and data-mining tech environments.
Chronological tech
When I recently attended a debate at the Danish parliament on the critical influences of digital technology and artificial intelligence at work, the meeting was joined by people from various backgrounds: unions and think tanks, lawyers, doctors, software developers, teachers, and academics sharing observations on the influence of digital developments in everyday life. While opposing opinions were shared, most concerns kept reappearing: the protection of data, the inability of democratic processes to keep pace with technological developments, and finally, a quest for independent initiatives, be it through academic networks, critical workshops, or critical creativity. In a European context, factors such as GDPR, the expansion of ChatGPT, and critical media coverage boosted by whistleblowing and leaked private data probably all played a role in raising such critical digital awareness. The public debate exemplifies a possibly increasing public engagement in digital development and in alternatives to corporate digital power structures.
A rough outline of the web, from its anonymous and barely accessible beginnings to its seductive and omnipresent power structure of today, is relevant to understanding the involvement and drives of opposing, independent critical and creative tactics, and to contextualising the significance of minor tech historically.
While web history has many layers, its differences can, as pointed out by Prof. Bogna Konior, be simplified down to the drives for money, data, identity, and anonymity. Connecting these drives to the different web periods, Konior presented “wormholes and hidden narratives” of the structure of the internet. When considering the significance and accomplishments of creative tactics and minor tech alternatives throughout web history, it is relevant to recall the early cyberspace that was imagined as wild, free, and counter-cultural.
Naturally, looking into the history of the internet reveals more than money, data, and identity; it also traces the development of technologies, ethics, and values. From being an exclusive site of military initiatives and a favourable way to proceed with academic research, Web 1.0 presented a system of static one-way interactions that, despite developing into a public domain, had limited bandwidth and user access (Curran; Lovink). With Web 2.0 (2004), the world wide web introduced a digital platform for user interaction. Web 2.0, with its myriad of tracking algorithms and commercial drives, presented us with automated profile generation and thriving, data-hungry digital platforms across various fields (social media, search engines, shopping). In this context, networking services created a structure within which user-produced content could thrive and where online shopping boosted online money-making.
In contrast to the financially driven heyday of the giant platforms of Web 2.0, the introduction of Web 3.0 represented a web-based structure aimed at creating autonomous networks and increased privacy by implementing blockchain technologies and systems unchained from personalization. By proposing a cooperative solution, Web 3.0 exists as an alternative to the centralized structure of Web 2.0 with its handful of dominant digital platforms (most notably Google, Amazon, Meta, Apple, and Microsoft). However, while Web 3.0 was introduced nearly a decade ago, most digital platforms (and users) were shaped in the context of Web 2.0. Despite its decentralized mechanisms, Web 3.0 has not been adopted by most users but remains used by smaller, minor, counter-tech communities.
Minor Tech
In essence, the Internet is based on a system of protocols. In general terms, protocols structure the relation, order, and chronology between all units embedded in the network. This approach arguably turns accessible and readable code into a force of the movement. The development of minor tech draws links to organized protocols, conscious computing, and digital services that offer alternative platform usage. Minor tech is often organized around, within, and through small communities of users and often overlaps with the wider movements of copyleft and Free/Libre and Open Source Software.
Admittedly, the notion of ‘minor tech’ was unheard of to me before contributing to the workshop ‘Towards a Minor Tech’ organized by the Digital Aesthetics Research Center, Aarhus University, and the Centre for the Study of the Networked Image (LSBU). Before the workshop, I consulted a good friend and collaborator of mine with the intention of deepening my knowledge and discussing artistic practices concerning the topic. Instead, it turned out that the friend, too (active across various artistic and open-source initiatives and networks), needed to familiarize themselves with the notion of minor tech. While neither of us had come across the term before, the concept and values driving minor tech were familiar. When relating this paper to the topic of minor tech, this personal encounter seems relevant to mention, as I have come to realize that it also reflects part of the nature of minor tech: firstly, that minor tech is (at the moment of writing) known not to the masses but to dedicated minorities, and secondly, that minor tech reflects a set of values more than a methodological set of rules. However, while minor tech might be unfamiliar to the masses, examples of its existence are at the same time incredibly familiar in instances of digital commoning such as knowledge-sharing platforms and socializing services, some of which will be returned to later in the paper.
With a nod to the digitally located Damaged Earth Catalogue, initiated by Marloes de Valk, the concept of minor tech is presented as “small” tech solutions that operate at a human scale and are motivated by a drive for digital privacy, resource minimalism, environmental consciousness, and collaborative communities. Minor tech presents solutions for everyday platform navigation that do not involve commercial platforms and leans towards a DIY approach driven by a critical response to the current Web 2.0 corporate platform giants. As a kind of minor tech initiative in itself, the Damaged Earth Catalogue functions as an “evaluation and access device” for tracing actions and initiatives of collaborative and intimate counter-reactions to capital, commercial, and political power. While the movement of minor tech can be traced with a focus on political connotations, in the context of this paper it is not so much the political theme that I draw on, but rather examples of the tactics in use that exploit, mimic, and reshape existing online infrastructures in the quest for critical alternatives to corporate digital platforms.
Tactical tech
Beyond the scattered illustrations mentioned in the first section, what makes the case study of VPN interesting is not so much the meaning of the content but its physically dependent disruption. Under regular conditions, a server remains static; however, due to the disrupted content of VPN, the server transformed into an active element of the experience. The artists describe these disruptive features as displaying both “strength and weakness” (VPN reader). Thereby, when generating a visual interface across the different locations, VPN revealed the physical process of the infrastructural system. In revealing its own doings, the VPN tactic likewise invited the user to understand the making of its infrastructure.
Tactical media is not a new interruptive genre on the web but has been explored by activists, journalists, artists, and academics across several contexts. With an overall drive towards possible socio-economic digital change, Geert Lovink has prominently researched critical artistic digital media practices as tactical media. Tactical media is considered by Lovink a term that can “retain mobility and velocity and avoid the paralysis induced by the essentialist questioning of everything” (Lovink 271). In other words, tactical media opposes the passive use of commercial digital norms and actively contributes to critical understandings and alternative environments. Besides existing as a critical method for approaching digital media environments, the creative nature of tactical media also requires, or triggers, imagination from makers and users by changing the course of expectations and navigation. This link between everyday digital interaction and presumed space has been well researched as “algorithmic imaginaries” by media researcher Taina Bucher, describing how mental representations and speculations about the workings of digital algorithms nudge users’ behaviors, interactions, and navigation (Bucher). However, while Bucher focuses on social media mechanisms, artistically provoked imagination makes possible limitless outcomes for the creative mind. With this in mind, VPN, as well as minor tech, intentionally disrupts the anticipated user interaction and challenges the traditional ways we imagine and interact with everyday digital media.
VPN exemplifies a tendency in contemporary artistic production that intentionally (mis-)reads, mimics, and replicates digital platform logics and behaviors through an artful inclusion of persons, space, and objects. However, it is difficult to deny that the most parasitic presence is caused not by minor or artistic counter-movements but rather by the large corporate digital platforms themselves, platforms that have the capacity to expand across digital activities, extract data, and adjust algorithms for commercial intentions. Take Google, for example: being the most extensive digital search platform, Google presents itself as a company for search engine technologies, consumer electronics, and software. Meanwhile, the actual business model is based on advertising and data mining. As noted by John Durham Peters, Google is of such an omnipresent scale that “For many, Google is the internet” (Peters 329). Similarly, the language and concepts of commoning practices, such as minor tech, are likewise misused and appropriated into capitalist systems (such as gift economy and digital management) (Federici).
Glitchy tech
As you move through the different VPN nodes, the content starts flickering. What in the previous location accumulated a stream of photos now provokes the activation of sound or moving images. At this point, the pace of the VPN interface has become more familiar, and the repetition of content disruption makes it possible to start detecting the interface influence of the work.
While we often become aware of infrastructures only when they glitch or malfunction (Star and Ruhleder), turning the glitch into an intentional tactic can be detected in various ways across critical, creative projects. One example is the creation of computational ‘Algorhythmics’ by media researcher Shintaro Miyazaki, a technical tool that would trace and track algorithms by adding sounds to their workings and thereby also reveal their occasional malfunctions (Miyazaki); Ben Grosser’s ‘Safebook’ application did not detect but rather triggered an intentional malfunction of the Facebook interface by stripping its content down to its bare soft grey scroll; and finally, a digital platform initiative such as Cosmos Carl gathers and supports creative inventions that provoke glitches in conventional platform services through intentional misusage.
The broken interface that VPN presents us with allows a peek into the digital components that make up the interface, and by considering how these occupy and create space, and situate and guide their users, VPN arguably pushes its users from knowing about the existence of the infrastructure to understanding that infrastructure.
However, while the ability to perceive, analyze, and engage with material matter is central to critical humanistic studies, a posthumanist approach to minor tech and artistic examples assists in contextualizing their drives and intentions. It includes a study that crosses the disciplinary boundaries between software studies, artistic platforms, and cultural production online and questions the role of art in the context of technological matter (Goriunova, “Participatory Platforms and the Emergence of Art”). While the technical posthumanism that Olga Goriunova brings forth is introduced as a political tool for considering non-human species, it is helpful to consider how VPN, for instance, presents and accumulates knowledge-sharing interventions as an attempt at renewed digital engagement. From this position, VPN suggests a twisted instrumentality as an alternative to data-centred automated services, allowing glitches and failures to produce reasoning. Minor tech, and specifically VPN if considered as a transparent server architecture for knowledge sharing, sets an interesting example of how to reconfigure the meaning and understanding of specific technologies and comes to echo an effort of digital commoning. While the commons often refers to the sharing of land, natural resources, and related infrastructural systems (Federici), the thriving drive for radical movements and alternatives to capitalist models, along with aspects such as knowledge sharing, non-hierarchical organizing, and collective decision-making processes, likewise applies to the artistic contributions and minor tech presented in this paper. This inclusive nature follows the ongoing determination for transparency within mechanical transactions (Braidotti). In this context, the posthumanist lens inevitably contributes an intersection of the material and the immaterial, which, in the context of minor tech, unveils the intersection of technological structures, collaborative practice, and non-hierarchical structures, be it through glitches, tactics, or alternative tools.
Caring tech
The technical complications of projects such as VPN consist of material limitations, from electricity plugs to internet connections. Starting as a software project, VPN grew into a compilation of nodes, proxies, and hosting, eventually becoming a system of maintenance and care.
While the front-end presentation of VPN content seems distorted, the contributions to the VPN database were locally accumulated and made up of internal publishing initiatives across the Sandberg Instituut. Ironically, the messy reality of the data-collecting process simultaneously worked as cross-departmental sharing of internal learning. Effectively, the more people used the VPN, the more it became about maintenance, accessibility, and care.
To consider the significance of the nurturing labour behind VPN and minor tech, Goriunova's work on ‘shadow librarians’ illuminates the relevance of the subject for creative commoning and knowledge sharing. When considering how knowledge commons, in this case digital libraries, are generated and maintained, the subject positions involved in interactions with humans and non-humans become relevant. Defining ‘meta librarians’, ‘public custodians’, ‘general librarians’, ‘underground librarians’, ‘critical public pedagogues’, ‘multiform bibliographers’, ‘fancy general archivists’, and ‘cultural analysts’, Goriunova covers the understated tasks of amateur historians and librarians. ‘Shadow librarians’ provide and care for free and open online infrastructures that enable users to share and debate digital texts and collections (Goriunova, “Uploading Our Libraries: The Subjects of Art and Knowledge Commons”; Mars and Medak). These tasks not only remind us of the previously mentioned Damaged Earth Catalogue but also resemble the knowledge-sharing and caring structures of several critical projects from the early digital era. Considering minor tech as an extension of earlier critical technical accomplishments helps sketch the longer-term relevance of minor tech. However, to concretize the tasks of amateur historians and librarians up close, we detour to a personal encounter:
Alongside his computer engineering duties, the co-founder of Leksikon.org (who is also my dad) fulfils the role of guardian, custodian, and amateur historian of this critical and politically charged online Danish encyclopedia. With the motto ‘doubt everything’ and an About section stating that the encyclopedia ‘is not, and does not intend to be, neutral’, the tone is set.
Leksikon.org is a non-profit initiative run on volunteer engagement and initiated to produce alternative narratives to those arriving from positions of power. Leksikon.org originates in the Web 1.0 era and predates the much more famous encyclopedia, Wikipedia (which did not register its domain until 2001). While Leksikon.org is not driven as a counteraction to large-scale digital platforms like minor tech or VPN, it draws on similar efforts for bottom-up knowledge generation and sharing. This approach to the encyclopedia is not unique but draws inspiration from the Norwegian leftist publishing initiative PAX (1978-82), which, through cheap printed catalogues, gathered and provided knowledge from an international radical left point of view. The organizational structure of Leksikon.org is made up of a large number of contributors and translations of selected texts. While everyone can contribute, submissions are filtered by smaller editorial groups of the organizing team. As with many minor tech initiatives, the content is hosted on a private server, which reflects both financial constraints and protection from right-wing hacks. When scrolling through the different entries and the expansive country section (counting 247 entries), one inevitably stumbles on outdated spots, reflecting both the immense work of updating such a project and the slower pace of caretaking that projects like Leksikon.org require. The work behind Leksikon.org consists of multiple late-night hours in front of a stationary computer, nourishing the encyclopedia, researching, translating, writing, and maintaining code. As counter-publics often occur when there is little or no room for independent participation (Warner), Leksikon.org exemplifies such a techno-cultural drive for sharing knowledge with an engaged public through the various tasks of ordering, converting, sorting, and translating knowledge. Through caring and sharing communal structures, counter practices, in these cases, become tools that demonstrate knowledge as a process of public-making. Despite its differences, Leksikon.org contributes an example of a digital predecessor to minor tech. It constitutes an example of disobeying conventional knowledge sources and forming new subject positions from which new sociopolitical directions can take form.
Conclusion
While we struggle to make sense of how corporate digital developments activate, direct, manipulate, and exhaust online environments, creative works show the potential to reveal such fabrics by visualizing, materializing, or simulating everyday software operations.
This writing presents artistic scenes unfolding in or around the peripheries of alternative artistic and technological practices. In this context, VPN exemplifies a work that suggests how technical creative tactics may provoke interaction between the institution, the users, and their spatial surroundings. This kind of user interaction forms an analogy that makes us aware of the digital infrastructure and disrupts the way most current corporate technology insists on smooth interaction that does not interrupt the experience.
Ultimately, the VPN aims to downscale technology to a graspable level. Situating the VPN and minor tech in parallel with each other allows us to address large tech critically while learning about small-scale tech intimately.
Acknowledgements
Thanks to Raphaël Bastide for inspiration, to Frederique Pisuisse for Cosmos Carl insights and to PUB VPN for sharing their instructions made in collaboration with LAG lab for VPN.
Works cited
Berlant, Lauren. “The commons: Infrastructures for troubling times*.” Environment and Planning D: Society and Space, vol 34, no. 3, 2016, pp. 393-419. https://doi.org/10.1177/0263775816645989.
Siegert, Bernhard. Cultural Techniques: Grids, Filters, Doors, and Other Articulations of the Real, translated by Geoffrey Winthrop-Young, Fordham University Press, 2015, pp. 121-146.
Bowker, Geoffrey C. and Star, Susan Leigh. Sorting Things Out: Classification and Its Consequences, The MIT Press, 1999.
Braidotti, Rosi, and Hlavajova, Maria. Posthuman Glossary, Bloomsbury Publishing, 2018.
Bucher, Taina. “The Algorithmic Imaginary: Exploring the ordinary affects of Facebook algorithms.” Information, Communication & Society, vol. 20, no. 1, 2017, pp. 30-44. https://doi.org/10.1080/1369118X.2016.1154086.
Cochior, Cristina; Karl Moubarak, and Jara Rocha. "Digital Discomfort." PROSPECTIONS, Autumn 2022. https://www.bakonline.org/prospections/on-digital-discomfort-editorial/#_ftn3.
Curran, James. Rethinking internet history, Routledge, 2012.
Easterling, Keller. Extrastatecraft, Verso, 2014.
Entangled Transparency 3.0. ZKM, Karlsruhe. Online conference, 2023. https://zkm.de/en/event/2023/03/entangled-transparency-30.
Federici, Silvia. “Feminism and the Politics of the Commons.” The Commoner, 2011. <http://journals.kent.ac.uk/index.php/feministsatlaw/article/view/32>.
Gillespie, Tarleton. “The platform metaphor, revisited.” Digital Society Blog, 2017. https://www.hiig.de/en/the-platform-metaphor-revisited/.
Goriunova, Olga. “Participatory Platforms and the Emergence of Art.” A Companion to Digital Art, edited by Christiane Paul, Wiley, 2016.
Goriunova, Olga. “Uploading Our Libraries: The Subjects of Art and Knowledge Commons.” Aesthetics of the Commons, edited by Shusha Niederberger, Cornelia Sollfrank and Felix Stalder, 2021, pp. 41-61
Grosser, Ben. Safebook, 2019. https://bengrosser.com/projects/safebook/.
Lovink, Geert. Dark Fiber: Tracking Critical Internet Culture, The MIT Press, 2002.
Mars, Marcell, and Tomislav Medak. “Against Innovation: Compromised Institutional Agency and Acts of Custodianship.” Ephemera, vol. 19, no. 2, 2019. http://www.ephemerajournal.org/contribution/against-innovation-compromised-institutional-agency-and-acts-custodianship.
Monteagudo, Graciela. “Women Reclaim the Commons: A Conversation with Silvia Federici.” North American Congress on Latin America (NACLA), vol. 51, no. 3, 2019, pp. 256-261. http://dx.doi.org/10.1080/10714839.2019.1650505.
Miyazaki, Shintaro. “Algorhythmics: Understanding Micro-Temporality in Computational Cultures.” Computational Culture, vol. 2, 2012. http://computationalculture.net/algorhythmics-understanding-micro-temporality-in-computational-cultures/.
Parisi, Luciana. "Media Ontology and Transcendental Instrumentality." Theory, Culture & Society, Vol. 36, No. 6, pp. 95-124, 2019. https://doi.org/10.1177/02632764198435.
Parks, Lisa, and Starosielski, Nicole, eds. Signal Traffic: Critical Studies of Media Infrastructures, University of Illinois Press, 2015.
Peters, John Durham. The Marvelous Clouds: Toward a Philosophy of Elemental Media, University of Chicago Press, 2015.
Pisuisse, Frederique and Helgason, Saemundur Thor. Cosmos Carl. https://cosmoscarl.com/.
Plantin, Jean-Christophe. “Infrastructure studies meet platform studies in the age of Google and Facebook.” New Media and Society, vol. 20, no. 1, 2016, pp. 1–18. https://doi.org/10.1177/1461444816661553.
Rocha, Jara, Femke Snelting, and Helen Pritchard. Extended Trans*Feminist Rendering Program, The Underground Division, London, 2020. http://ddivision.xyz/.
de Valk, Marloes. Damaged Earth Catalogue. https://damaged.bleu255.com/.
Warner, Michael. "Publics and Counterpublics." Public Culture, Duke University Press, Vol. 14, no. 1, 2002, pp. 49-90.
Woodgate, Agustina; Gomez, Miquel Hervas, and Krischock, Sascha. VPN, 2019.
xenodata co-operative (Alexandra Anikina, Yasemin Keskintepe)
Spirit Tactics: (Techno)magic as Epistemic Practice in Media Arts and Resistant Tech
Abstract
Speculative narratives of (techno)magic such as those offered by feminist technoscience, cyberwitches and techno-shamanism come from knowledge systems long marginalised in a hyper-optimised and hard-science-reliant capitalist discourse. Aiming to de-centre Western rational imaginaries of technology, they speak from decolonial and translocal perspectives, in which the relations between humans and technology are reconfigured in terms of care, relationality and multiplicity of epistemic positions. In this paper, we consider (techno)magic as an act of transgressing a knowledge system plus relational ethics plus capacity to act beyond the constraints of the current capitalist belief system. (Techno)magic is about disentangling from commodified forms of belief and knowledge and instead cultivating solidarity, relationality, common spaces and trust with non-humans: becoming-familiar with the machine. What critical approaches, epistemic and aesthetic procedures do these speculative practices enable in media art and resistant tech? In what ways does “magic” act as an alternative political imaginary in the age of hegemonic Western epistemologies? Drawing on feminist STS and the works of artists such as Choy Ka Fai, Omsk Social Club, Ian Cheng, Suzanne Treister and others, we propose to address (techno)magic seriously as an ethical and epistemic practice.
Sorcery? It is a metaphor, of course? You don't mean that you believe in sorcerers, in 'real' sorcerers who cast spells, transform charming princes into frogs or make the poor women who have the bad luck to cross their path infertile? We would reply that this sort of accumulation of characteristics translates what happens whenever one speaks of the 'beliefs' of others. There is a tendency to put everything into the same bag and to tie it up and label it 'supernatural’. What then gets understood as 'supernatural' is whatever escapes the explanations we judge 'natural’, those making an appeal to processes and mechanisms that are supposed to arise from 'nature' or 'society’. – Philippe Pignarre and Isabelle Stengers, Capitalist Sorcery: Breaking the Spell
Introduction
Recent exhibitions demonstrate an interest in technology as connected to, intermixed with or implicated in magical practices. Inke Arns’ Technoshamanism (2021) at HMKV Hartware MedienKunstVerein, in Dortmund, Germany, was, perhaps, the most directly relevant to the topic. Post-Human Narratives—In the Name of Scientific Witchery (2022) at Hong Kong Museum of Medical Sciences, curated by Kobe Ko, explored para-scientific, esoteric and unorthodox medical practices mixing science and witchcraft. Wired Magic (2020) at Haus der Elektronischen Künste Basel, curated by Yulia Fisch and Boris Magrini, focused on the rituals and methods of artists intertwining magical practices with technology. Recently, The Horror Show! (2023) at Somerset House, London, contained a section titled Ghost, which outlined the British history of post-spiritualist hauntologies of electronic media.
As Jamie Sutcliffe notes at the launch of Magic, a collection he edited in the Whitechapel series Documents of Contemporary Art, interest in magical practices in the arts re-emerges every few years. [1] However, the specific intersection of the magical and the technological also tends to follow waves of innovation and the consequent waves of anxiety about technology within public discourse (as can be seen even in the recent rise in apocalyptic debates about artificial intelligence after the launch of ChatGPT). These debates often refer to the famous quote by Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic” (Clarke, n.p.). What does this quote say about them? More often than not, it is understood as a necessity for linear progress: if technology is to advance sufficiently, it must undergo a process of development. It also implies that advanced technology cannot avoid being opaque: its internal operation must be inaccessible to the user, casting the human-technology relationship into the categories of 'belief' or 'trust'.
The anxiety-driven narratives tend to forgo the issues of ethics and care in favour of driving catastrophic imaginaries of technology. With this in mind, we would like to situate our proposition of (techno)magic by taking it outside of the binary of rationality and irrationality. Rather, we would organise it around the following question: what place is accorded to magic in the current discourses of technology, both fueled by and shielded from practices of belief?
Situating (Techno)magic
If we approach magic and technology as fields of knowledge with specific genealogies, we will often find them entangled. Erkki Huhtamo outlines the archaeology of magic in media, pointing out that the development of media technologies is closely tied to magic, from the Mechanical Turk to moving images and animation (Huhtamo). In the West, the Victorian history of spiritualism and mesmerism, ghost photography and technologically aided 'neo-occult' séances directly connected supernatural forces, energies and spirits with the newly introduced technological and scientific advancements (see Chéroux et al., Mays and Matheson). Jeffrey Sconce in Haunted Media addresses a particular kind of electronic presence, an “at times occult” sense of liveness or “nowness” that inhabits electronic media. This history extends back to the invention of modern means of communication that introduced simultaneity and immediacy as radically new types of experience of other people’s voices and images, such as with the introduction of the telegraph by Samuel Morse in 1844 in the USA, or photography by Louis Daguerre in 1839 in France. [2] These histories (while a close look at them is beyond the scope of our current exploration) bring an interesting dimension to the intersections of magic and technology.
First of all, the contemporary idea of 'magic' itself is constituted and situated as a term created by Western modern technologies and Western orientalism, where the inevitable categorisation of unexplained phenomena either as scientific truths or as magical illusions played a significant role in the construction of the myth of contemporary science as rational and infallible. Secondly, while 'magic' as a term serves to further underscore the terms 'science' and 'technoscience' as rational, magic as such simply refers to alternative knowledge systems in which the myth of rationality is not the dominant one, and other cosmologies can come to the fore. Depending on how magic is understood within these two senses (as a Western term for everything irrational or as a word referring to cosmologies outside of it), and what kind of knowledge system stands behind it, we can construct multiple interpretations of magic, including ones where magic is read as modernity’s ultimate technology, and ones where magic is proposed as an alternative to technology. In line with the first understanding, Arjun Appadurai speculates that “capitalism… can be considered the dreamwork of industrial modernity, its magical, spiritual and utopian horizon, in which all that is solid melts into money” (481).
The second understanding of magic as an alternative to technology can be approached through the work of Federico Campagna, for whom Magic and Technic are two of the many possible “reality-settings”: “implicit metaphysical assumptions that define the architecture of our reality, and that structure our contemporary existential experience” (4). He sees Magic as oppositional to Technic: if Technic’s first-order principle is the knowability of all things through language, Magic’s first and original principle is that of the “ineffable”, where “the ineffable dimension of existence is that which cannot be captured by descriptive language, and which escapes all attempts to put it to ‘work’ - either in the economic series of production, or in those of citizenship, technology, science, social roles and so on” (10). While we do not agree with the juxtaposition of Technic to Magic, we find the exploration of “the ineffable” a very important distinction, especially for the quantified world of digital culture: “being put to work” means not only the physical labour process, but also various data being put to work within a statistical model, or being valorised in any other way.
The domain of the (techno)magical is the domain of epistemic acts, or acts of knowledge construction, especially in media arts and resistant tech practices. We are also interested in the potential impact of such reframing on the ethics and epistemics of human-technology interaction and on the development of relations of care with and via technology, with others and the world. We approach this from the perspective of our encounters with the concepts of magic in the Western art and technology scene, and from our positions as Western-educated curator-researcher and artist-researcher.
It is also important to underline that the kind of 'magic' that we mean comes from contemporary artistic research where the magical is interpreted politically: borrowing further from the discussion of Documents of Contemporary Art, we are not interested in “esoteric transcendentalism or results-based magic” but rather in “the aspect of ritual that allows for an encounter with otherness in the self”, or “wonderment” (Whitechapel Gallery). Magic, and especially magical rituals, serves as a de-habituation from the naturalised behaviours of epistemic systems we find ourselves in.
What we call (techno)magic, then, is understood, first of all, as an act of granting access to an alternative knowledge system. It retains the "techno" part in brackets in order to preserve doubt about the false separation of the types of knowledge represented by the two parts. (Techno)magical constructs in media art and resistant tech can act as interventions into knowledge frameworks of late techno-capitalism, extending the relations of care and dissolving the hierarchies of knowledge production inherited from Western modernity.
The urgency of such care within the entanglements of technology with the world is particularly clear now. As Eduardo Viveiros de Castro argues, Anthropocene-thinking requires reassessing the predominant modes of operation in order to consider the heterogeneity of living and being in the world. Inke Arns underlined ecology as the central idea of the exhibition Technoshamanism (2021); for her, the return to shamanic and animist practices “has to do with the fact that we are living in a time when we realise that the system we have had up to now is also serving to destroy the world as we know it” (Arns). The turn toward alternative knowledge systems also allows us to change contemporary conceptions of technology, along with speculating on what kind of world they could engender. The ecological, feminist, decolonial approach is crucial in (techno)magical practice.
What we also want to emphasise against the backdrop of other entanglements of technology and magic is that the question lies not only in the opposition of magic to technoscience within the rationality-irrationality binary, but also in what potential there is for the magical to reinscribe the discredited meanings of the notion of belief. The magical, in the sense that we propose to consider here, activates a different modality of the word 'belief' than the commodified belief systems within capitalism. Rather, belief stands for a long-denied possibility of an alternative political imaginary (one that, as Mark Fisher suggests, is excluded within capitalist realism (Fisher)). Within capitalism, belief can only be exercised without judgement within the confines of certain institutions, such as a temple, a church, a hospital, a rave, or an art space. In the same way that it discredits other belief systems, the neoliberal mind does not allow 'magic' into realms of serious consideration, inflecting it with a categorical epistemic downgrade, especially when it comes to research.[3] It is also not by mistake that the most popular magical story of the last thirty years is set, essentially, in a bureaucratised and regulated environment: a school for wizards. Therefore, in our thinking, this is the core provocation of magic: it activates the systems of belief in a space where they are not supposed to be activated. And non-religious belief seems like a precondition for convivialist politics of coexistence, joyful labour, care and non-hierarchical relationality.
At the same time, we are not suggesting that magic is a universal solution to capitalism; it is not possible to exit into magic as some kind of primordial innocent state, and no knowledge system can play the role of a 'noble savage' at this point in history. To us, magic is a granular, messy middle situated between sliding and not always matching scales of epistemic conditions and politics. This is important in the processes of construction of belief in relation to the scale of technology, which operates differently at the level of “minor tech” (Andersen, Cox) and at the scaled-up, infrastructural level of corporations and states.
These considerations situate our definition: we understand (techno)magic as an act of transgressing a knowledge system plus relational ethics plus the capacity to act beyond the constraints of the current capitalist belief system. (Techno)magic offers two immediate propositions, in that it 1) accepts 'naturecultures' instead of a binary divide between technology and nature; and 2) inserts new granular relationalities between existing extremes, creating 'minor' rather than grand narratives.
In the first proposition, (techno)magic could be called “ethico-onto-epistemological”, following Karen Barad’s suggestion of the inseparability of ethics, ontology and epistemology (Barad, 90), precisely because it exists at the intersection of politics of nature and culture that argues against separation of these philosophical entities, and because it lends itself to problematising the experiences of the self and being-in-the-world.
Returning to the second proposition, in which (techno)magic complicates the relations of scale by inserting granular relationalities: technology, in relation to magic, should be liberated from being a despirited tool (a hammer), or from being a magic-wand type solution to the world’s problems; (techno)magic activates a possibility of the ineffable, and therefore uncapturable, dimension of magic in certain space-times inside techno-capitalist infrastructures. (Techno)magic does not simply become a technological prosthesis, but also does not become completely externalised as a miracle. Rather, its minor narratives are about acts of the personal becoming political through interaction. The relationality of 'becoming-familiar' with the machine can be read as literally familiarising yourself with a machine or technology that is unknown, and experiencing joyful co-production once the machine becomes known to the body and to its epistemic operation. But it can also be read as becoming-closer, like a familiar of a witch, meaning a useful spirit or demon (in European folklore) with whom a contract is made to collaborate. What is important here is the context of opening up new capacities to act, or capacities to act differently in a reality that was previously hidden.
Having proposed to take magic seriously as an ethical and epistemic practice, we would like to offer a methodological speculation on what kind of practices could be considered within the remit of the (techno)magical, following these two propositions. One example is a ritual-based work by artist Choy Ka Fai. Rituals are important relational practices since they weave together physical bodies through a set of symbolic actions that allow participants to build relationships with each other, with technologies, as well as with other entities, with the aim of bringing forth a transformational process for the self.
Choy Ka Fai’s audio-visual performance Tragic Spirits (2020) from his project CosmicWander (2019-ongoing) investigates how shamanic rituals in Siberia, in their histories and present constitutions, intersect with broader environmental, technological and political shifts. The performance combines audiovisual sequences (which include documentary footage of the artist's journey and 3D visualisations) with a dance performance. While the human dancer performs on stage, her movements are mirrored by a virtual avatar on the screen, transmitted via motion capture.
The work suggests the interconnectedness between the human body, nature, ritual and technology, culminating in the phrase “I have arrived at the centre of the universe - the universe inside you [me]” (Choy) that appears on the screen during documentary sequences. What Choy Ka Fai suggests is reaching a place and a state of deep connectedness attained through oscillation created by the many components of the ritual. The audio-visual experience, employing music and intense visuals, reaches the point where the energy of sound vibration is felt as a bodily encounter with the magical reality of the 3D figure on the screen.
Speaking of (techno)magic in the case of Choy Ka Fai’s work offers an opportunity to consider what kind of relationality the technological aspects of the work enable in relation to the spiritual ones. While the technology of motion capture in itself focuses on quantifying and abstracting the lived experience and often serves the monetisation and further capture of data’s value, in Tragic Spirits it seems to be employed towards another goal, namely, mediating the experience of facing the ineffable. The movement between the documentary film, the dancer, the music and the avatar creates a closed-circuit loop between the bio-techno-kinetics and their representation on the screen. In doing so, the performance weaves the 'black box' of technology into a sacred ritual. The motion capture animates the avatar on the screen, allowing the viewers to see the connection between it and the dancer. Yet, considering this bond and the dancer in the traditional sense of a shaman entering an altered state of consciousness, the viewers do not make the same journey as she does - the motion capture can mediate and make visible, but cannot abstract or datafy the spiritual journey. This is, precisely, one of the major points of the work: the unknowable must be confronted, seen, heard and experienced without being subsumed.
The potential for human-technology relationality that extends beyond the instrumental and the techno-solutionist, of course, doesn’t have to be restricted to media art or research contexts. It can be traced to a variety of lived experiences of technology, from mundane to techno-spiritual. However, it is in artistic practices that we find useful fissures and tensions, and where politics have the potential to become most immediately visible and negotiated.
Decolonising (Techno)magic
Having established (techno)magic as human-technological relationality, it becomes necessary to further situate it in relation to the ethics and politics of being human: by whom and for whom should this relationality be redefined? Magic has also served as one of the “categorical fictions that would justify both the non-Western and Euro-American proletarian superstitions by colonial and governmental expansion” (Whitechapel Gallery). Seen as an instrument of imperialism and colonial violence, magic designated what kind of worlds and knowledge systems could exist and, by extension, what kind of environments could be destroyed and what kind of voices would be excluded and dominated. Feminist and decolonial (techno)magic, then, needs to engage with the concepts of positionality, care, labour, and embodied experience of life, and demonstrate a particular type of embeddedness that entails awareness of relationality and multiple ontologies.
Consider the recent work of writer and technologist K Allado-McDowell, whose book Air Age Blueprint weaves theory, poetry, AI-generated text and diagrams into what can be read as a manifesto of cybernetic animism and interspecies collaboration. Allado-McDowell constructs a blueprint of a world where AI allows a wider sense of communication and understanding of non-humans, and where human consciousness is augmented entheogenically,[4] meeting this new universe halfway. While the concept of (techno)magic finds parallels with this imaginative work, as it does with the concept of procedural animism (Anikina), it also differs in its treatment of the role of the human. Reading the book both as inspiration and with productive critique, we first trace the question of the possibility of decolonial embeddedness of non-Western cultural traditions in the Western context; and then consider how to position (techno)magic closer to the applied practices of care, relationality and labour.
Air Age Blueprint underlines the importance of belief systems in the current techno-cultural moment:
The age of the human is defined by our quantifiable effects on natural systems… These effects are an inheritance, the expression of a genetic trauma in the belief systems and sociotechnical structures of the modern West, a kind of curse. Redesigning infrastructure away from Anthropocenic destruction is one way of breaking this curse. But to do this we need a new set of beliefs and a new imaginary (67).
For Allado-McDowell, the new imaginary is built on the premise of “interspecies intelligence” (70), achieved through a combination of entheogenically altered perception and AI sensing systems that would make the natural world not only legible to humans but also deeply understood and acknowledged: “the goal is to articulate an Earth-centric myth that meets the requirements of human flourishing in an ecosystem where humans are recognised as animals dependent on birdsong or jaguar vitality for their survival and thriving” (70).
Allado-McDowell underlines that they conceive of “non-speciest thinking of Indigenous cosmologies and shamanic spirituality as a diverse set of ecological epistemologies: different ways of knowing not just through reason or intuition, but also on the level of ontology and practice” (71). This returns us to the initial question: how do we conceive of the lifeworlds of others as 'ecological epistemologies' without assimilating them into the language and operation of late liberalism and Western epistemology - one could argue, often in the same way that the words 'shaman' and 'shamanic' already do?
Allado-McDowell offers precise critiques of that possibility. They are acutely aware that the proposition for the combination of ecological awareness, technology and entheogenic culture can be (and already is to some extent) subject to capitalist capture and extraction. This is true as much for technology (wearables, augmented reality, global connectedness) and shamanic practices (alienated from their original context and reframed as mindfulness or self-care), as it is for entheogenic practices that are being subsumed and redeveloped as novel psychedelic compounds. To decolonise entheogenesis, Allado-McDowell underlines, crucial steps are required: “more interrogation of the Anthropocene, associated environmental reversals and technoscientific instrumentalism”, combined with urgent critique of capitalism (77).
Air Age Blueprint seems to come from a particular context of capitalism that puts emphasis on entheogenesis, the universalised image of ‘ecosemiotics’ and references to transhumanism and cybernetics. The narrative proposes outlets for emancipation, yet they seem to circulate within the boundaries of the individual rather than collective practice (at least in human terms). At the end of the book, the main character, a filmmaker and poet, freelances as a beta-tester of a new AI program, Shaman.AI. The character is prompted to ‘contaminate’ the database with indigenous knowledge structures they encountered earlier in the narrative in the Amazonian rainforest while being taught by a healer. The metaphor of contamination, while already existing in real-life interactions with machine learning systems as ‘prompt injection’ (or ‘injection attack’, in cybersecurity language), is, at the same time, a proposal for subversive action and an acknowledgement of the near-impossibility of direct resistance.
Where Allado-McDowell suggests that a future ecosemiotic AI translating between the human and non-human worlds is construed as “what in the Amerindian view might look like a shaman” (71), bringing the Amerindian epistemology into the Western one, we would like to continue the line of questioning into the specific Western politics of imagination, care and labour without choosing a specific magical tradition. In relation to this view, (techno)magic, while taking the considerations outlined above on board, leaves open the question of interweaving specific cultural practices of magic into its own understanding of the term. This is an unresolved tension that we reserve as a potential task for future research. Our address to the (techno)magical primarily deals with the messy practice of post-digital culture inheriting from Western modernity, focusing on ethics of relationality as understood by feminist technoscience as ethics that operationalise the terms of labour, embodied experience and care.
The reasons are twofold: first, we are wary of positioning these systems of knowledge as ready-made solutions: indigenous knowledge is not an instrument of care for the Western world. Rebuilding relations of care requires attention to the material and embodied worlds within existing epistemologies. Secondly, in the context of existing media art and resistant tech practices, the ideas of 'magic' come from very different lifeworlds. Some employ specific vocabularies to describe technology, such as 'spells' or 'codebooks', while not necessarily practising magic as traditionally understood (some members of varia and syster server collectives). Some draw directly on existing witchcraft practices (Cy X, a Multimedia Cyber Witch, or Lucile Olympe Haute, artist and author of Cyberwitches Manifesto). The International Festival of Technoshamanism in Brazil unites practitioners who integrate computation, software and hardware into existing systems of belief by techno-mediating rituals and approaching technological artefacts as magical tools, beings or effigies. Following this, even where there is a specific tradition of magic to draw upon, there is also a multiplicity of potential (techno)magics, each requiring an exploration of the situated knowledge systems and ethical positions of the people who adopt them. What becomes important in the context of the current article is considering how these multiple positions plug into the existing Western epistemics, and how the disruption of the dominant knowledge systems takes place.
Care, Feminist Technoscience and (Techno)magic as Relational Ethics
When we refute the idea of ‘innocence’ contained in non-Western lifeworlds (and, therefore, in their magical traditions), we encounter the acts of belief in the world of Western tech in their own granular and messy context. What we call technology does not preclude non-instrumental relations to the world, and is sometimes directly contingent on unarticulated acts of belief. For example, this happens in places where belief is justified by one or another accepted reason, be it a case of cryptocurrency exchange or a Shintoist robot priest. In the former, it is a pre-approved belief in the fluctuations of value that upholds the existence of the crypto-market; and in the latter, it is the established religious practice that paves the way for technology to be accepted. Similarly, acts of belief are encountered where care is monetised, such as in toys like Aibo or Tamagotchi, or in the medical field (where care is a valuable resource that can be outsourced to robots). If we let go of these commodified types of belief, what prevents us from making new relations of care outside of the boundaries drawn by techno-capitalism? Lucile Olympe Haute in Cyberwitches Manifesto, for instance, foregrounds magic as a practice of resistance grounded in feminist ethics. She writes about technology and magic without hierarchical distinction:
Let's use social networks to gather in spiritual and political rituals. Let's use smartphones and tarot cards to connect to spirits. Let's manufacture DIY devices to listen to invisible worlds (n.p.).
In the ethos of this manifesto, technology is liberated from the burden of being rational and therefore is reinscribed back into the realm of ethico-political practice. What other practices can we think of that would allow us to inscribe relationality of care into the current technological landscape?
We imagine (techno)magic as a materially embedded and embodied feminist practice that starts from a point in which non-humans, including machines, are not outside of the normative human-to-human relationality. This also calls for a rethinking of the role that non-humans play in it. Maria Puig de la Bellacasa explores this in her book Matters of Care. She calls for deeper integration of the concept of care into the relational and material consideration of the world:
Care is everything that is done (rather than everything that ‘we’ do) to maintain, continue, and repair ‘the world’ so that all (rather than ‘we’) can live in it as well as possible. That world includes… all that we seek to interweave in a complex, life-sustaining web (modified from Tronto 1993, 103) (161).
She follows Bruno Latour in underlining that human existence is not solely dependent on and deeply interwoven with humans, but rather with many others, including technological things. Latour calls for turning away from “matters of fact” and toward “matters of concern” as a resolution of the issue of taking “facts” for granted and therefore voiding the relations with these matters of political urgency. Puig de la Bellacasa then suggests a productive critique by escalating “matters of concern” further into “matters of care,” “in a life world (bios) where technosciences and naturecultures are inseparably entangled, their overall sustainability and inherent qualities being largely dependent upon the extent and doings of care” (Brons, n.p.).
Turning towards specific entanglements produced by artists, we can consider another ritualistic artwork that reframes technology in relation to belief systems. Omsk Social Club uses LARPing (Live Action Role Play) as a way to create “states that could potentially be fiction or a yet unlived reality” (Omsk Social Club). In each work, a future scenario functions “as a form of post-political entertainment, in an attempt to shadow-play politics until the game ruptures the surface we now know as life” (Omsk Social Club). Some of the themes they explore include rave culture, survivalism, desire and positive trolling. The work S.M.I2.L.E. is of particular interest as a “mystic grassroot” ceremony (Omsk Social Club) that explores freedom from protocols of quantification and efficiency in the age of technological precision. The work starts with each user giving up one of their five core senses to engage in synesthetic experiences and reach other states of sensing. The work is, at the same time, a critique of the communities that gather around eco-technological innovation, and a spiritual ceremonial practice through which users explore synesthetic acts such as being blindfolded, fasting and dancing. These allow users to engage with the LARP structure as a ritual that critiques neopagan constructions for their lack of reflexivity and suggests a local politics of being, interacting, sensing and playing.
It is important to note that the word “users” is chosen by Omsk Social Club to underline the role of the ceremony as a quasi-technology or software for the participants to make use of: the work reactivates machine-human relations as politically engaged and embodied ritual experiences. Omsk Social Club often works outside or between frameworks set by art institutions, engaging with spaces such as raves or the office space of a museum - institutional infrastructures outside of the “white cube”. In doing so, they also reinscribe the format of LARP in the context of art and technology infrastructures, producing critical meaning through the embodied interaction of the players/users. As Chloe Germaine notes, LARP is distinct from other modes of playing in how it prioritises embodied immersion and “inhabiting both position of ‘I’ and ‘They’ as player-character negotiations” (Germaine 3). Furthermore, Germaine underlines how the “magic circle”, or limits of what is considered an in-game place and what is “out of character area”, allows the players to “hack and transform identities and social relationships” (Germaine 3). In Omsk Social Club’s work, “creating a drift between body and mind” (Anikina, Keskintepe) is an important part of the ritualised engagement. LARPing is a kind of “open source magic” and a “theatre for the unconscious” in that it allows the users to get an embodied experience of technology (including the technology of their own body) and to practice and experience new political positions (Anikina, Keskintepe).
By way of conclusion: Spirit tactics and aesthetics for the Anthropocene
Choosing to care actively is the starting point of considering (techno)magic as relational ethics and embodied epistemic practice. (Techno)magic is about disentangling from libertarian, commodified, power-hungry, toxic, conquering forms of belief and knowledge, and instead cultivating solidarity, relationality, common spaces and trust with non-humans: becoming-familiar with the machine. Part of becoming-familiar means letting go of human exceptionalism to an extent: becoming on the scale that, in current theoretical thinking, extends to being posthuman or even ahuman (as Patricia MacCormack suggests). Crucially, this perspective means entangling the technological into what could be called “media-nature-culture”, bringing about a “qualitative shift in methods, collaborative ethics and, (…), relational openness” (Braidotti 155). It suggests a material and embedded form of thinking, which increases the capacity to recognise the diverse and plural forms of being. Recognising technological mediation, synthetic biology and digital life leads to the emergence of different subjects of inquiry, non-humans as well as humans as knowledge collaborators (Braidotti).
While (techno)magic does often involve particular surface-level aesthetics, and artists working within such contexts often utilise ‘alien’ logos and fonts (OMSK Social Club), diagrams (Suzanne Treister) or sigil-like imagery (Joey Holder), the question of aesthetics goes beyond symbolic relations. In line with a media-nature-cultural understanding, aesthetics should be seen, primarily, as aisthesis, as the realm of the sensible and its distribution (Rancière), most urgently in relation to the suffering brought by the climate emergency, experienced unevenly across the planet. It also needs to refute universalism by seeing the media landscape as uneven and diverse, following the call for a Patchy Anthropocene in order to disentangle from the flattening terms of the Anthropocene, such as “planetary” (Tsing, Mathews, Bubandt). The idea of a patch, they explain, is borrowed from landscape ecology, which understands all landscapes as necessarily entangled within broader matrices of human and nonhuman ecologies.
Speculatively, and trying not to create any more new terms, we might want to designate a kind of spirit tactics for image politics in the Anthropocene discourse, as it requires engagement with images as apparitions of capitalism: acknowledging symbolic power and complications of representation, yet focusing on data structures, on the operational images and infrastructural politics of collective thinking and action. Here, perhaps, a note on the two distinct interpretations of the word 'spirit' is in order: first, 'spirit' can be understood as 'willpower' or inner determination; second, it can refer to the supernatural forces figured as beings or entities that are therefore able to participate in political life and in rituals that activate systems of belief. In other words, we can consider spirit tactics as a proposition for a form of political determination to be actualised within (techno)magic, be it through images, alternative imaginaries, portals, diagrams or operationalised ways of embodied thinking (rituals).
(Techno)magic asks for the emergence of layered tactics of image production that allow both for the processes of figuration and the underlying 'invisuality' of what is being figured (e.g. data). Ian Cheng’s work Life After BOB: The Chalice Study (2022), an animation produced in the Unity game engine, presents an interesting consideration for the figuration of technological entities (or even spirits). The work offers a future imaginary of a techno-psycho-spiritual augmentation in a world where “AI entities are permitted to co-inhabit human minds” (Cheng). BOB (Bag of Beliefs) (2018-2019), as the AI system is called, is integrated with the human nervous system. BOB is meant to become a “destiny coach”, acting as a simulation, modelling and advising system that guides humans to probabilistically calculated outcomes during their lifetime.
The protagonist of the film is Chalice Wong - the daughter of the scientist responsible for BOB’s development and the first test subject, augmented with BOB since her birth. BOB and Chalice are bound by a contract that allows BOB to take control of life versions of Chalice in order to lead her down the best possible life path. Yet as the film progresses, Chalice is depicted as increasingly alienated and discontent as BOB’s quest for the ideal path of self-actualisation takes over her destiny. She gradually becomes a prosthesis for the AI system. Chalice’s father considers “parenting as programming”, but he also treats his daughter’s fate as an experiment to develop BOB into a commercial product. The animation style, colourful, chaotic and glitchy, which is typical of Ian Cheng’s work, does well to represent both the endless variations of the future that BOB calculates in order to secure the best possible one and the hallucinatory moments of Chalice’s consciousness-jumping between her own self and BOB, entangling and disentangling with and from her technological double.
How do we figure our futures from the inside of the capitalist condition in the Anthropocene? Life After BOB: The Chalice Study can be seen as a dark speculation on the instrumentalisation of the human 'connectedness' to the world, a gamified version where a human’s worth is measured on a scale stretching from failure to success in self-actualisation. The spiritual aspects of Chalice’s journey are shown as completely commercialised: fate, destiny, and willpower are all presented as part of a cognitive product that sees the human body as the latest entity to capitalise on.
One particular aspect of Life After BOB: The Chalice Study is significant for questioning the tactics of visualisation. As the work is completed in the game engine, a lot of the underlying data structure for the animation is not hand-coded but is operationalised through various shortcuts that are usually used in game design. These include light, movement, and glitchy interactions of various objects. Ian Cheng notes that making animation in a game engine is more like creating software, allowing for fast production of iterations of the scenes (Nahari). Furthermore, the prequel to this work, BOB (Bag of Beliefs), was a live simulation of BOB displayed in a gallery as an artificial life entity that could be interacted with. These procedural aspects of visualisation introduce a consideration of underlying processes: even though Life After BOB: The Chalice Study is a recorded animation and not a live simulation like Ian Cheng’s previous works (the Emissary trilogy), the feel of images being driven by computational processes rather than manual aesthetic choices is still retained. In this sense, and also in the narrative choices of a human child augmented by an AI spirit, Life After BOB: The Chalice Study presents interesting considerations for the visuality of (techno)magic as a kind of combinatorial aesthetic figuration that unfolds between the figuration and its underlying infrastructure.
Contemporary technospirits such as Alexas, Siris, Tays and others are not so removed from the imaginary of BOBs. Algorithmic agents, bots and other figured entities participate in the world of aesthetic transactions spun across real and virtual worlds, engaging in relational processes with humans, including a range of interactions and affects. This could be seen in the spirit of procedural animism that ‘emerges exactly as figural tactics; it attends to the “aliveness” with which the algorithmic agents and other figured AIs participate in the contemporary life as represented (and, therefore, as lived, at least in terms of image economy), yet designated to play particular roles within neoliberal structures’ (Anikina 147). The process of figuration can be deployed toward different political motivations: the (techno)magical approach would call for alternative figurations, technospirits that enable other environmental, political and cultural futures.
Another tentative tactic that we, as co-authors of this paper and as a collective, suggest is diagrammatic thinking. A diagram, as we see it, can be critical, operative and performative. It can actualise connections and lines of action. A diagram does not represent, but maps out possibilities; a diagram is a display of relations as pure functions. More importantly, a diagram can enable various scalings of possibility: from individual tactics to mapping out collective action and to infrastructural operation. K Allado-McDowell employs diagrams in Air Age Blueprint. They comment: the task at hand “is not just ecological science but ecology in thought: how do we construct an image of nature with thought - not through representation or translation, but somehow held in the mind in its own right?” (73).
Artist Suzanne Treister maps diagrammatic thinking in Technoshamanic Systems (2020–21). Technoshamanic Systems “presents technovisionary non-colonialist plans towards a techno-spiritual imaginary of alternative visions of survival on earth and inhabitation of the cosmos … [and] encourages an ethical unification of art, spirituality, science and technology through hypnotic visions of our potential communal futures on earth” (Treister). The diagrammatic nodes of Treister’s work underline various forgotten and 'discredited' lines of knowledge, putting together alternative structures that extend both into the genealogy of knowledge and into the potential versions of the contemporary and of the future. In doing so, it achieves a kind of epistemic restoration by implying that these nodes belong to the same planes, categories, surfaces and levels of consideration - a move opposite to epistemic violence and hegemonic narratives.
Figure 1 is a diagram drawn by us that represents the role of magic as an epistemic practice in relation to the embodied interaction of individuals (primarily Western subjects) with the world of late techno-capitalism. They can engage with magic (or (techno)magical rituals) as a relational and embodied epistemic practice; yet what they also face, within the Western epistemic, is an overall loss of capacity for belief, fuelled by neoliberal markets and datafication. In this epistemic journey, they have to negotiate the pressure of so-called rationality and the inevitable presence of the ineffable, which can also be very normatively interpreted and captured in the form of popular entertainment, traditional belief systems and even random, sub-individual algorithmicised affects of image flows and audiovisual platforms such as TikTok.
Within the (techno)magical consideration, many different diagrams are possible. The aim behind them is not to stabilise, but to make visible and to multiply alternatives. However, this is just one of many potential “spirit tactics”: our ultimate proposition is to take magic seriously as an ethical and epistemic practice. We appeal to a tentative future: thought becoming operationalised as we engage in thinking-with diagrams and use diagrams as rituals-demarcated-in-space; finding solidarity with our dead - ancestors, but also crude oil - in the face of the Anthropocene; rituals against forgetting; worlding and making technospirits.
Notes
1. And similar paragraphs referring to recent exhibitions can be found in the earlier collections of academic writing; see, for example, “The Machine and the Ghost” (2013).
2. Here, the public presentation is important for the context, so the invention of photograms by Fox Talbot in Britain in 1834 can be omitted.
3. It is not by chance that when this paper was proposed, in the process of development, to the NECS 2023 conference dedicated to the topic of “Care”, it was allocated to the panel “Media, Technology and the Supernatural”.
4. Entheogen means a psychoactive, hallucinogenic substance or preparation, especially when derived from plants or fungi and used in religious, spiritual, or ritualistic contexts.
Works cited
Allado-McDowell, K. Air Age Blueprint. Ignota Books, 2023.
Andersen, Christian Ulrik, and Geoff Cox. "Toward a Minor Tech". A Peer-Reviewed Newspaper, edited by Christian Andersen and Geoff Cox, vol. 12, no. 1, Apr. 2023, p. 1.
Anikina, Alexandra. "Procedural Animism: The Trouble of Imagining a (Socialist) AI". A Peer-Reviewed Journal About, vol. 11, no. 1, Oct. 2022, pp. 134–51.
Anikina, Alexandra, and Yasemin Keskintepe. Personal conversation with Omsk Social Club. 2 Feb. 2023.
Appadurai, Arjun. "Afterword: The Dreamwork of Capitalism". Comparative Studies of South Asia, Africa and the Middle East, vol. 35, no. 3, 2015, pp. 481–85.
Arns, Inke. "Deep Talk Technoschamanismus". Kaput - Magazin Für Insolvenz & Pop, 5 Dec. 2021, https://kaput-mag.com/catch_en/deep-talk-technoschamanism-i_inke-arns_videoeditorial_how-does-it-happen-that-in-the-most-diverse-places-artists-deal-with-neo-shamanistic-practices/.
Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press, 2007.
Braidotti, Rosi. Posthuman Knowledge. John Wiley, 2019.
Brons, Richard. "Reframing Care – Reading María Puig de La Bellacasa 'Matters of Care Speculative Ethics in More Than Human Worlds'". Ethics of Care, 9 Apr. 2019, https://ethicsofcare.org/reframing-care-reading-maria-puig-de-la-bellacasa-matters-of-care-speculative-ethics-in-more-than-human-worlds/.
Campagna, Federico. Technic and Magic: The Reconstruction of Reality. Bloomsbury Publishing, 2018.
Cheng, Ian. Life After BOB: The Chalice Study. Film, 2022, https://lifeafterbob.io/.
Chéroux, Clément, et al. The Perfect Medium: Photography and the Occult. Yale University Press, 2005.
Choy, Ka Fai. Tragic Spirits. Audio-visual performance, 2020, https://www.youtube.com/watch?v=lOTOn0WtaFA&ab_channel=CosmicWander%E7%A5%9E%E6%A8%82%E4%B9%A9.
Clarke, Arthur C. Profiles of the Future: An Inquiry into the Limits of the Possible. Harper & Row, 1973.
de la Bellacasa, María Puig. Matters of Care: Speculative Ethics in More than Human Worlds. University of Minnesota Press, 2017.
Fisher, Mark. Capitalist Realism: Is There No Alternative?. Zero Books, 2009.
Germaine, Chloe. "The Magic Circle as Occult Technology". Analog Game Studies, vol. 9, no. 4, 2022, https://e-space.mmu.ac.uk/631195/3/The%20Magic%20Circle%20as%20Occult%20Technology%20Draft%203%203-10-22.pdf.
Haraway, Donna. The Companion Species Manifesto: Dogs, People, and Significant Otherness. Prickly Paradigm Press, 2003.
Huhtamo, Erkki. "Natural Magic: A Short Cultural History of Moving Images". The Routledge Companion to Film History, edited by William Guynn, Routledge, 2010.
Latour, Bruno. "Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern". Critical Inquiry, vol. 30, no. 2, Jan. 2004, pp. 225–48.
MacCormack, Patricia. The Ahuman Manifesto: Activism for the End of the Anthropocene. Bloomsbury Publishing, 2020.
Mays, Sas, and Neil Matheson. The Machine and the Ghost: Technology and Spiritualism in Nineteenth- to Twenty-First-Century Art and Culture. Manchester University Press, 2013.
Nahari, Ido. "Empathy Is an Open Circuit: An Interview with Ian Cheng". Spike Art Magazine, 9 Sept. 2022, https://www.spikeartmagazine.com/?q=articles/empathy-open-circuit-interview-ian-cheng.
Olympe Haute, Lucile. "Cyberwitches Manifesto". Cyberwitches Manifesto, 2019, https://lucilehaute.fr/cyberwitches-manifesto/2019-FEMeeting.html.
OMSK Social Club. S.M.I2.L.E. Volksbühne Berlin, 2019, https://www.omsksocial.club/smi2le.html.
Rancière, Jacques. The Politics of Aesthetics. Bloomsbury Academic, 2018.
Sconce, Jeffrey. Haunted Media: Electronic Presence from Telegraphy to Television. Duke University Press, 2000.
Treister, Suzanne. Technoshamanic Systems. Diagrams, 2021-2022, https://www.suzannetreister.net/TechnoShamanicSystems/menu.html.
Tsing, Anna Lowenhaupt, et al. "Patchy Anthropocene: Landscape Structure, Multispecies History, and the Retooling of Anthropology: An Introduction to Supplement 20". Current Anthropology, vol. 60, no. S20, Aug. 2019, pp. S186–97.
Viveiros de Castro, Eduardo. "On Models and Examples: Engineers and Bricoleurs in the Anthropocene". Current Anthropology, vol. 60, no. S20, Aug. 2019, pp. S296–308.
Whitechapel Gallery. "Magic: Documents of Contemporary Art". YouTube, 18 Sept. 2021, https://www.youtube.com/watch?v=pUOYhzZkxwk&ab_channel=WhitechapelGallery.
Alasdair Milne
Lurking in the Gap between Philosophy of Mind and the Planetary
Abstract
This article outlines an emerging tendency prominent in the theory and practice of the art & technology domain to ‘horseshoe’ the urgencies of planetary-scale technology with questions traditionally associated with the philosophy of mind, conventionally placed at a much lower level of analysis. It delineates and problematises this trend in the theoretical plane, before considering the ‘interpersonal’, stemming from the work of Hannah Arendt, as a mediatory level of analysis and a ground from which to reconcile these contemporary concerns. This intervention acts as a methodological clarification. The implications of this shift are explored for the theorisation of ‘minor tech’ projects as scalable systems which originate at the interpersonal, but can leverage change upscale.
Introduction
Big Theories which engage ‘advanced technologies’ (Serpentine R&D Platform, 2020) in general — and machine learning (ML) in particular — are burdened by ambiguities of scale. In one direction there lies a tendency toward analysing phenomena at the grander macrolevels of ‘planetary computation’ (Hui; Bratton). At the opposite end of the scale operates the zoomed-in investigation that characterises ‘mind’ or ‘cognition’, and its technological equivalents (Metzinger; Gamez), marked by dense metaphysical perplexities. When brought together in the art & technology field to confront seemingly urgent technological problems, the respective complexities and agendas of these distant ‘big’ and ‘small’ scales compound to produce confused conceptualisations of ‘planetary-scale intelligence’. Though the urgency of analysis across scales necessitates such work, the way such scalar reconciliations are performed requires evaluation.
This article first lays out in more detail this problem of scale, or ‘level of analysis’, for the contemporary theorisation that is implicated in the art and technology domain, assessing the contemporary tendency to pull together scalar extremities without reconciling the critical tensions between them. It then goes on to focus on an intermediate level of analysis, the ‘interpersonal’, tested as a ground from which theories can be built, but also from which these micro- and macro-level phenomena can be interpreted and assessed more effectively. In all, it proposes a course correction in which these ‘horseshoed’ instances of interscalar theory are mediated by the social domain of the interpersonal.
The scalar horseshoe problem
It might be taken, on the face of it, that discussions of scale presuppose increasingly smaller entities at one end, and increasingly larger entities at the other. A maximally noncontroversial view of this intuitive scalar setup might be pictured as a simple continuum running from the largest entities at one end to the smallest at the other.
Here our epistemological categories carve up our world along the intuitive lines of ‘big’ (the planetary) through to ‘small’ (the mind) as we know them from our prima facie human (though not necessarily humanist) standpoint. In order to detect, record, measure, and then talk about ‘bigger’ entities, like the ‘planet’, we need to zoom out, through abstraction, in order to comprehend them. This requires losing some granularity. Some entities are arguably too big to quantify and measure in the first place, only knowable through conceptual abstraction (Morton). In order to make sense of the microscopic, we are required to zoom in, thus foregoing the sense of ‘perspective’ that might show us how things fit together.
This intuitive view however assumes that scalar levels can be understood as affixed to what might be called ‘entities’ (‘substances’ or ‘objects’) rather than ‘process’; that ‘bigger’ entities correspond to ‘bigger’ processes, and ‘smaller’ entities to ‘smaller’ processes. It makes an assumption about ‘bigger’ relations between bigger entities, and ‘smaller’ relations between smaller. It assumes that processes and relations do not transect scales. It also conflicts with emerging tendencies to be found across philosophy, art and technology in which the macroscopic and the minute are sometimes horseshoed into speculations of planetary-scale cognition to compound their urgencies. Here, a lot of the concerns about localised phenomena (human-scale cognition) are imposed upstream on grander-scale infrastructure, a trend also particularly widespread in discussions of the seemingly ‘cognitive’ capabilities of recent large language models (Berardi; Floridi). Most currently, debates surrounding the nature and conditions of ‘thought’ and ‘agency’ are, reasonably, extended from localised agents to planetary-scale infrastructures in order to assess emergent phenomena. But in testing these concepts at the planetary scale, swathes of relational activity, traditionally situated between the cognitive and the planetary, are bypassed, and the role of such social, or interpersonal, processes in constituting global cognitive systems runs the risk of being neglected.
This tendency is widespread in the context of artistic practice, and particularly amongst those engaged closely with the overlapping posthumanist (Braidotti) and vibrant materialist paradigms (Bennett). For the sake of good-faith engagement with the problem at hand, I will focus on the upstream theoretical sources that engage and sometimes inspire — often through superficial interpretation — such horseshoeing, since they reveal their rationale explicitly, allowing fuller engagement with its core assumptions. This acts as an alternative to targeting specific art practitioners through an interpretation of their work, where these ideas are deployed in embedded and operative contexts. This methodological decision reflects a commitment to take art practice seriously as a contributor to theoretical discourses, while allowing its practitioners the space to engage in speculative thinking, in this way levelling theoretical questions at the appropriate interlocutors.
The problem
The basic setup here — specifically the scalar gulf that separates questions of ‘mind’ from questions of ‘the planetary’ — tracks with Robin Mackay’s problematization of the relationship between the local and the global scalar levels, in Mackay’s case their invocation in conceptualising so-called ‘site-specific’ art practice. They level a warning that’s relevant to us, outlining
the vague notion that [...] the entire universe is compacted into every site, giving rise to invocations of the type 'we are all made of stardust’ — sentiments which, whether uttered in a mood of wonder or cosmic desolation, effectively put an end to any navigation of the space of knowledge (Mackay 261).
Here the small contains the big — in fact, every small (or local) entity (in this case ‘site’) contains a totality (the universe) — deeming the ontological status of such claims tautological. If every site contains the universe, each site is rendered equivalent in its potential for artistic and conceptual explication. This tautology from ‘site-specific’ art pairs with another tautology implicit here when Mackay refers to ‘stardust’ — that everything is connected, an intellectual commitment that sometimes troubles vibrant materialist approaches to art & technology (across theory and practice). The question that is derived from this analysis for our purposes then becomes: how do we build a theory of interscalar, planetary technologies in cultural practice which avoids the trap of these ‘universe in every site’ and ‘everything is connected’ tautologies? In light of this, I will consider how such planetary-scale systems of computation can be influenced by non-conglomerate actors — ‘minor’ technologists, artists and theorists working in tandem.
The ‘planetary’ scale
By the admission of one of its most prominent analysts, Yuk Hui, ‘planetary’ is ‘largely interchangeable’ with the previously fashionable and now-laden ‘globalisation’ (Hui). Despite this concern, a good faith rendering of the planetary and its specific conditions can be delineated by turning to the work of Patricia Reed (2019), who offers a deep explication of the concept and its explanatory potential. The scale of the planetary is not simply a replacement for a level of analysis, in her view, but contains within it stacked relations:
The “planetary scale” serves as an initial, terminological index for this big-world condition of coexistential nth dimensionality. Particularly deployed in discourses on climate change and ubiquitous computation throughout the last decade, the planetary scale, in general, describes the consequential magnitude of (some) human techno-economic activity. (Reed)
The planetary scale should not be viewed, from this perspective, as a total abstraction in which the detail is left behind, but a high-dimensional scale which contains, necessarily, lower-dimensional scales, and their activity therein, which constitutes the planetary as a whole. It is also anchored in reference to a particular planet, Earth, inclusive of its biomes and atmosphere, though it leaves open the possibility for extraplanetary and interplanetary analyses. Here, the ‘local’ is shaped by situated relations elsewhere, a highly interdependent plurality which is only contained by the ‘planetary’ as an organising principle: “sites or situations are co-constituted by extra-local relations. There exists an array of contextual conditions that co-produce any instance of localization” (Reed). Importantly, viewing the planetary as this navigable scalar stack makes it possible to contain intermediary levels of analysis, and provides a framework through which to map the relations that transect them.
Such a view of the planetary might complicate scales by acknowledging, for example, the leverage that cognitive decision-making in the sphere of politics might have for planetary-scale entities, even prior to the emergence of intervening technologies. Moving across scales is a phenomenon that takes place both in practice, then, and analytically, when we build theories or narratives to account for the interaction that takes place across Reed’s ‘nth’ dimensions. The problem is not then working across scales, or that the planetary abstracts away the possibility of engaging those intermediary levels; but rather, how those scales are moved across, and what is carried. These intermediary levels mediate not just the inter-locality that she argues for, but also serve an explanatory purpose, helping to account for how such planetary-scale systems emerge out of said interdependent localities. This will be addressed shortly in the section on the ‘interpersonal’.
The ‘mind’ scale
Often, to explain emerging technological phenomena, such thinking turns to ‘small’-scalar concepts, frameworks, or even simply assumptions, from the philosophy of mind — or indeed neuroscience (VanRullen and Kanai) as an interdependent field — the discipline(s) best tooled to think about questions of agency viz. sentience, consciousness and more precise concepts of ‘thinking’ in general. These invocations are not always carefully deployed and integrated with planetary speculations, however. We can analyse a viable theory which performs this operation by turning to Global Workspace Theory (GWT), as just one recent, and technically relevant, example that brings questions of mind into the realm of planetary-scale ML computation. It imports, from a specific neuroscientific model, that
shared information at each moment in time — the global workspace — is what constitutes our conscious awareness. In functional terms, the global workspace can serve to resolve problems that could not be solved by a single specialized function, by coordinating multiple specialized modules (VanRullen and Kanai 1).
As such, GWT is implicitly physicalist insofar as it commits us to the position: if consciousness is an emergent property of complex material organisation which spans multiple functional zones, then we should consider the possibility that consciousness might emerge out of a global infrastructure of machine learning. Though GWT should not be inherently conflated with a total planetary intelligence as argued for elsewhere (Frank et al.), the type of ‘conscious awareness’ proposed by this position is planetary-in-scale because of the infrastructures required to support such constituent neural networks which compose the Global Workspace. This would be constituted not just by the active neural networks that are taken as the core of such technology, but further by the much more expansive support structures (Mackenzie, 2017: 23): the substrate of data production and scraping to which human populations contribute. GWT is just one of many approaches to considering the planetary computational viz. the qualities or capacities of mind. Such speculations are also well-represented in posthumanist literatures, one compelling example being Betti Marenko’s "Hybrid Animisms" (Marenko 7). Here, Marenko speculates on the possibility of more complex relations of computational and human mind through distributed and planetary infrastructures: “assemblages have become us, in a milieu of organic, nonorganic, human, nonhuman, carbon, silicon, atoms, bits, which is creating an ‘incipient machinic sensate world', a world which is both sensing and sentient" (Marenko 12). Marenko’s federated view of the planetary-in-scale operable system reflects a broader shift in artistic speculation, seeing technology not as discrete tools, but rather as part of an intraoperative whole.
Such huge scalar leaps between the ‘planetary’ scales of contemporary computation and the functions traditionally ascribed to ‘mind’ might reasonably concern the ‘minor’ technologist though, as well as a broader base of critics sceptical of such conceptual manoeuvres, given that they make technology the domain in which these highly abstract and sometimes obtuse philosophical debates are being conducted. This is not to suggest that a planetary-scale view of computation more generally is somehow inaccurate, but rather that, regardless of the position, some explanatory theory at lower scalar levels is necessary to reach such conclusions. What is needed, then, is a different level of granularity from which we can build theories of human-computational interactivity, and through which the jump from the processes of ‘mind’ to the planetary-scale computational infrastructure that demands theorisation can be linked or grounded. The ‘interpersonal’ has been considered elsewhere by Jeremy Bendik-Keymer (2020) as a response to the planetarisation of thought, but here it is forwarded as a mediator and starting point for course-correcting this horseshoeing trend in the art & technology field.
The interpersonal
Not all theories which contend with the culture of planetary-scale technological infrastructures depart from the macrolevel. Hannah Arendt, considered here via Patrick Hayden’s (2015) reading, posits that human activity is situated in the interdependent field of “the space of appearances” in which thought and deliberation take place as common activities. According to Arendt, ‘labour’ — the cyclical toil that provides us with sustenance — and ‘work’ — the processes through which we co-constitute the world — are distinct (Canovan and Arendt ix). ‘Work’ is in part the building of a common technological infrastructure — “a composition of human artifice” built together through ‘work’ (Hayden, 2015: 754) — which can be understood in some holistic sense, like the ‘planetary’ as per Reed, except that its construction takes place within a more local frame of reference that we not only understand, but iteratively build and occupy. ‘Work’ also forms a common ground for thinking through the reciprocal relationship between cultural production, which renders our world a particular way, and the building of technological infrastructures, which shapes the way this cultural production takes place. Both are brought into being through collaborative engagement, requiring multiple hands for each iterative component, as well as the accumulative production of Arendt’s composite human artifice.
This frame of reference — or rather scalar level of analysis — can be identified as the interpersonal, encompassing the social and productive relations that take place outside of, and between, our introspective selves (i.e. beyond the bounds of ‘mind’ as conventionally conceived). Analysing these relations (Drichel) as interactions, exchanges, and collaborations in work and labour helps to account, at a more granular level, for where a planetary-scale computational system comes from and how it operates. It also enables the human contributions to such planetary systems, which appear and are sometimes analysed as autonomous (Bratton, The Terraforming 13), to be made visible, not in the interests of arguing for some universal truth about the nature of human-machine collaboration, but in order to render the specific human contributions to the specific computational systems which compose any arguably overarching planetary superstructure like those we’ve seen speculated.
Locating ‘work’ within the interpersonal allows us to identify where lower-level processes interface with grander planetary infrastructures. Downscale work constitutes upscale infrastructure, which then determines the conditions for the world in which we subsequently live and work. Arendt calls this the ‘objective’ world that sits between humans and nature:
Against the subjectivity of men stands the objectivity of the man-made world rather than the sublime indifference of an untouched nature [...] Only we who have erected the objectivity of a world of our own from what nature gives us, who have built it into the environment of nature so we are protected from her, can look upon nature as something "objective" (Arendt 137).
Though on first reading this view of the constructed infrastructure of humanity appears aggrandising, Arendt is here pointing out that such infrastructure is part of, and interfaces humanity with, the natural world — “built it into” — our shared environment from which any infrastructure fundamentally derives. More important is to understand this ‘objective’ constructed world that we live in not in the epistemological sense, but as something that shapes our existence in the same way as the ‘natural’ components of our environment. Once we build it, it is there, and we must live with it or attempt collectively to reshape it, through work or political action. Any planetary-scale computation, then, comes from our collective work, but must subsequently be worked with.
This view of the interpersonal appears compatible with Reed’s conception of the planetary, then, which aligns on this question of the ‘situatedness’ of human processes in interscalar existence. But it insists on a bottom-up approach to understanding the processes of building planetary-scale computation. To be sure, the nonanthropogenic processes that enable this — the geological formation of the natural resources which are shaped into such computational infrastructure, for example — are best accounted for across planetary-scale geological time in the first instance, as Arendt suggests when she argues that:
material is always a product of human hands which have removed it from its natural location [such as] interrupting one of nature’s slower processes, as in the case of iron, stone, or marble torn out of the womb of the earth. This element of violation and violence is present in all fabrication, and homo faber, the creator of the human artifice, has always been a destroyer of nature. (Arendt 139)
But this extraction is an empirical matter of historical record, and though planetary in scale it is, as a claim, entirely distinct from speculations about emergent cognitive phenomena across infrastructure. This is perhaps where scalar distinctions are best made, then: from the point of view of analysing such divergent processes, despite the contingency of one (planetary computation) on the other (geological mineral formation and extraction). This interscalar dependency is also where ‘level of analysis’ diverges from any ontological argument about scalar levels which can be “carved at their joints”, since the levels of analysis we use to examine different processes must best serve that analysis.
Leveraging change upscale
We come to know the Planetary, then, through discursive and interpersonal work, in which lower “nth-dimensional” levels give insight into the construction of higher ones. This is also where we build the ‘total artifice’ of planetary-scale computation, creating infrastructure at scale incrementally, piece by piece. Thinking our contemporary technological circumstances through this set-up, in which technological ‘work’ takes place in our midst, though often behind closed doors, might lead us to ask why we so often focus on understanding technology at the planetary scale in the first instance. I would suggest that this tendency comes from seeing technology as an artefact or abstract condition to be evaluated in postproduction rather than as a distributed and simultaneous field of research & development which can itself be entered — this behind-the-scenes is discussed by the Creative AI Lab as the “back-end” (Bunz and Jäger). The barrier to access then becomes a practical and methodological one rather than an ontological impasse. This is not to say that we don’t engage in analysis across scales, but rather that we can share a ground with such technology and its developmental contexts.
If we adopt this Arendtian framing, then we can shift to seeking access (the practical) and identifying how to build an analysis (the methodological). If we want to understand technological developmental work at the scale of the conglomerates — which is vital — we must follow in Jaton’s footsteps, seeking permission to access their personnel and environs (Jaton). But if we are interested in the systems built by artists, we should instead seek the hospitality of artists themselves, and engage with the R&D processes that bring such systems into the world, as well as with the institutions that sometimes house the most intensive technical research practices. Here, we move from critique, an inheritance of an art historical discipline tooled for a different time, to an engagement with the process of production which is more granular, and perhaps even reciprocal.
In refocusing our analysis on the work that happens at the interpersonal level, though, there remains the problem of executive management over such systems of technical work. Nation states have political leaders; technology companies have CEOs; universities have chancellors and boards, with funding bodies upstream. These exert pressure on the kind of ‘work’ accounted for here, through political pressure, command and control, and funding constraints which determine how work is carried out. ‘Minor tech’ projects in general, and art & technology examples in particular, offer a pathway out of these downward exertions of creative control. Though working outside of corporate contexts profoundly reduces the resourcing available to practitioners, the purpose of such work is also different. Firstly, these projects are characterised by different delivery pressures: new technical systems here become an end in themselves, their development not beholden to performance metrics defined by profitability. Secondly, they can become incubators for thinking which is developed more horizontally between those involved, allowing the executive function, design and decision-making to become federated and localised.
Finally, and perhaps most importantly, these ‘minor’ artists’ projects act as subsystems (or countersystems) within a corporate-dominated landscape of technical R&D: what Meadows (1999) calls a ‘leverage point’ which can initiate broader change. This can take many forms. Artists (and their collaborative teams) can develop new ideas and then seek scalable funding streams; they can produce prototypes that become exemplars, acting as proofs-of-concept for alternatives to the naturalised systems of the inherited Trad Web. An artist’s system or platform can be bootstrapped by a community of engaged users (see Hivemind, 2022) more effectively than by trying to present it from an early stage as monetizable. Once operable, a ‘minor’ technological system, positioned in this way as a proof-of-concept, is exactly what is required to undermine the hegemonic platforms that seem beyond competition. Though they might never scale to serve mass markets, their adoption by smaller communities offers the possibility of a comparatively more ‘organic’ growth pattern, or no growth at all, remaining the domains of specific subcultures. While such systems (autonomous of hegemonic platforms) are a promise more aligned with blockchain infrastructures, the servicing of smaller communities reignites this possibility for ‘scalability’, originally the promise of capitalism, negated by the market’s capture. Here then we zoom out again, from mapping the artist’s system as delimitable, to situating each as an enactive subsystem within a broader systemic landscape; perhaps what might now be the Arendtian artifice. Remembering that the action takes place at the interpersonal level, though, should give us hope that change can be leveraged upscale.
Thus, when we leverage change from the interpersonal scale to the state or planetary-scale through minor projects, we are engaging in what Ray Brassier calls the “collective self-mastery” required for true “self-governance” (Brassier 74). In this respect, the project of building anything ‘planetary’ — infrastructure, culture, politics, or systems which combine both — is an incremental one. The interpersonal and planetary should be seen as co-constituted, as well as approximately and imperfectly mappable. As such, the horseshoeing of mind and the planetary which inspired these reflections becomes a worthwhile critique of the field if, and only if, we go on to see the mechanisms of ‘mind’ as they are embedded in the social interactions of the interpersonal. It is in this way that we break the initial tautology of ‘the universe in every site’ from Mackay. Seeing such mechanisms embedded contextually, again as in Reed, means we might be better prepared with a framework for thinking through the cognitive implications of new technologies. This allows space for the relational elements of posthumanist approaches, like Marenko’s, to remain profoundly important, while also subject to good faith critique as part of a wider discourse.
My purpose here has been to share some thinking on the conceptually-grounded methodological struggles of theory-building in evasive empirical contexts such as corporate and artistic ML development, where the stakes seem high but technical access can be elusive. The purpose of departing from the ‘interpersonal’ is to provide a starting point for establishing where to look in trying to understand planetary-scale technologies that have metaphysical implications. Undertaking such analysis allows particular elements of each of these scalar approaches to be carried forward, while grounding them somewhere empirically more verifiable. This helps us to reduce abstraction by attempting to build theories from the ontological level that we are most accustomed to, while delivering a framework which can bear the abstraction necessary to discuss these urgent questions.
Works cited
Arendt, Hannah. The Human Condition. 2nd ed, University of Chicago Press, 1998.
Bendik-Keymer, Jeremy. "'Planetarity,' 'Planetarism,' and the Interpersonal". E-Flux Notes, 27 May 2020, https://www.e-flux.com/notes/434304/planetarity-planetarism-and-the-interpersonal.
Bennett, Jane. Vibrant Matter: A Political Ecology of Things. Duke University Press, 2010. Open WorldCat, https://muse.jhu.edu/book/70734/.
Berardi, Franco “Bifo”. "Unheimlich: The Spiral of Chaos and the Cognitive Automaton". E-Flux, 10 Mar. 2023, https://www.e-flux.com/notes/526496/unheimlich-the-spiral-of-chaos-and-the-cognitive-automaton.
Braidotti, Rosi. The Posthuman. Polity Press, 2013.
Brassier, Ray. "Prometheanism and Real Abstraction". Speculative Aesthetics, edited by Robin Mackay, Urbanomic, 2018, pp. 73–77. Open WorldCat, http://sbiproxy.uqac.ca/login?url=https://international.scholarvox.com/book/88865143.
Bratton, Benjamin H. "'New World Order': For Planetary Governance". Strelka Mag, 11 Mar. 2021, https://strelkamag.com/en/article/new-world-order-for-planetary-governance.
---. The Terraforming. Strelka Press, 2019.
Bunz, Mercedes, and Eva Jäger. "Inquiring the Backends of Machine Learning Artworks: Making Meaning by Calculation". School of Creative Media, City University of Hong Kong, https://www.cityu.edu.hk/artmachines2/symposium-programme#collapseOne.
Canovan, Margaret, and Hannah Arendt. "Introduction". The Human Condition, 2nd ed, University of Chicago Press, 1998, pp. xii–xx.
Drichel, Simone. "Relationality". Angelaki, vol. 24, no. 3, May 2019, pp. 1–2. Taylor and Francis+NEJM, https://doi.org/10.1080/0969725X.2019.1620445.
Floridi, Luciano. "AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models". Philosophy & Technology, vol. 36, no. 1, Mar. 2023, p. 15. Springer Link, https://doi.org/10.1007/s13347-023-00621-y.
Frank, Adam, et al. "Intelligence as a Planetary Scale Process". International Journal of Astrobiology, vol. 21, no. 2, Apr. 2022, pp. 47–61. Cambridge University Press, https://doi.org/10.1017/S147355042100029X.
Gamez, David. Human and Machine Consciousness. Open Book Publishers, 2018.
Hayden, Patrick. "From Political Friendship to Befriending the World". The European Legacy, vol. 20, no. 7, Oct. 2015, pp. 745–64. DOI.org (Crossref), https://doi.org/10.1080/10848770.2015.1069082.
Hui, Yuk. "For a Planetary Thinking". E-Flux, Dec. 2020, https://www.e-flux.com/journal/114/366703/for-a-planetary-thinking/.
Jaton, Florian. The Constitution of Algorithms: Ground-Truthing, Programming, Formulating. The MIT Press, 2021.
Mackay, Robin, ed. When Site Lost the Plot. Urbanomic Media Ltd, 2015.
Mackenzie, Adrian. Machine Learners: Archaeology of a Data Practice. The MIT Press, 2017.
Marenko, Betti. "Hybrid Animism: The Sensing Surfaces Of Planetary Computation". New Formations, vol. 104, no. 104–105, Dec. 2021, pp. 183–97. IngentaConnect, https://doi.org/10.3898/NEWF:104-105.08.2021.
Meadows, Donella H. Leverage Points: Places to Intervene in a System. The Sustainability Institute, 1999, https://donellameadows.org/a-visual-approach-to-leverage-points/.
Metzinger, Thomas. Being No One: The Self-Model Theory of Subjectivity. The MIT Press, 2004.
Morton, Timothy. Hyperobjects: Philosophy and Ecology after the End of the World. University of Minnesota Press, 2013.
Reed, Patricia. "Orientation in a Big World: On the Necessity of Horizonless Perspectives". E-Flux, Summer 2019, https://www.e-flux.com/journal/101/273343/orientation-in-a-big-world-on-the-necessity-of-horizonless-perspectives/.
Serpentine R&D Platform, and Rival Strategy, eds. Future Art Ecosystems: Issue 1. Art x Advanced Technologies. Serpentine R&D Platform, 2020, https://serpentine-uploads.s3.amazonaws.com/uploads/2020/07/Future-Art-Ecosystems-1-Art-and-Advanced-Technologies_July_2020.pdf.
VanRullen, Rufin, and Ryota Kanai. Deep Learning and the Global Workspace Theory. arXiv:2012.10390, arXiv, 19 Feb. 2021. arXiv.org, https://doi.org/10.48550/arXiv.2012.10390.
Susanne Förster
The Bigger the Better?!
The Size of Language Models and the Dispute over Alternative Architectures
The Bigger the Better?! The Size of Language Models and the Dispute over Alternative Architectures
Abstract
This article looks at a controversy over the ‘better’ architecture for conversational AI that unfolds initially around the question of the ‘right’ size of models. Current generative models such as ChatGPT and DALL-E follow the imperative of the largest possible, ever more highly scalable, training dataset. I therefore first describe the technical structure of large language models and then address the problems of these models, which are known for reproducing societal biases and for so-called hallucinations. As an ‘alternative’, computer scientists and AI experts call for the development of much smaller language models linked to external databases, which should minimize the issues mentioned above. As this paper will show, the presentation of this structure as ‘alternative’ adheres to a simplistic juxtaposition of different architectures that follows the imperative of a computable reality, thereby causing problems analogous to the ones it tried to circumvent.
In recent years, increasingly large, complex and capable machine learning models such as the GPT model family, DALL-E or Stable Diffusion have become the super trend of current (artificially intelligent) technologies. Trained to identify patterns and statistical features, and thus intrinsically scalable, large language models are seen as deriving their potential from their generative capability to produce a wide range of different texts and images.
The monopolization and concentration of power within a few big tech companies such as Google, Microsoft, Meta and OpenAI that accompanies this trend is driven by the enormous economic resources required by the models’ training processes (see Luitse and Denkena). The risks and dangers of this big data paradigm have been stressed widely: the working conditions and invisible labor that go into the creation of AI and ensure its fragile efficacy have been addressed in the context of click-work and content moderation (e.g., Irani; Rieder and Skop). In Anatomy of an AI System, Kate Crawford and Vladan Joler detail the material setup of a conversational device and trace the far-reaching origins of its hardware components and working conditions (Crawford and Joler). Critical researchers have also pointed out how the composition of training data has resulted in the reproduction of societal biases. Crawled from the Internet, the data and thus the generated language mainly represent hegemonic identities whilst discriminating against marginalized ones (Benjamin). Moreover, the infrastructure needed to train these models requires huge amounts of computing power and has been linked to a heavy environmental footprint: the training of one big Transformer model emitted more than 50 times the amount of carbon dioxide an average human emits per year (Strubell et al.; Bender et al.). Criticizing this seemingly inevitable turn to ever larger language models and the far-reaching implications of this approach for both people and the environment, Emily Bender et al. published their now-famous paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" in March 2021 (Bender et al.). Two of the authors, Timnit Gebru and Margaret Mitchell, both co-leaders of Google’s Ethical AI research team, were fired after publishing the paper against Google’s veto.
The dominance of the narrative of "scalability, [...] the ability to expand - and expand, and expand" (Tsing 5) in deep learning models – advanced especially by big tech companies – has obscured alternative approaches. With this paper, I will look at claims and arguments for different architectures of conversational AI by first reconstructing the technical development of generative language models. I will further trace the reactions to the errors and problems of generative large language models and the dispute over the ‘proper’ form of artificial intelligence between proponents of connectionist AI and machine learning approaches on the one side and, on the other, those of symbolic or neurosymbolic AI defending the need for ‘smaller’ language models linked to external knowledge databases. This debate represents a remarkable negotiation about forms of ‘knowledge representation’ and the question of how language models should (be programmed to) ‘speak’.
Initially, the linking of smaller language models with external databases, promising accessibility, transparency and changeability, had subversive potential for me because it held out the possibility of programming conversational AI without access to the large technical infrastructure it would take to train large language models (regardless of whether those models should be built at all). As I will show in the following, the hybrid models presented as an alternative to large language models also harbor dangers and problems, which become particularly evident in an upscaling of the databases.
In need of more data
Since its release in November 2022, the dialogue-based model ChatGPT has generated a hype of unprecedented dimensions. Provided with a question, an exemplary text or a code snippet, ChatGPT mimics a wide range of styles from different authors and text categories such as poetry and prose, student essays and exams, or code corrections and debug logs. Soon after its release, the end of both traditional knowledge and creative work, as well as of classical forms of scholarly and academic testing, seemed close and was heavily debated. Endowed with emergent capabilities, the functional openness of these models is perceived as both a potential and a problem, as they can produce speech in ways that appear human but contradict human expectations and sociocultural norms. ChatGPT has also been called a bullshit generator (McQuillan): bullshitters, as philosopher Harry Frankfurt argues, are not interested in whether something is true or false, nor are they liars who would intentionally tell something false; they are solely interested in the impact of their words (Frankfurt).
Generative large language models such as OpenAI’s GPT model family or Google’s BERT and LaMDA are based on a neural network architecture – a cognitivist paradigm based on the idea of imitating the human brain logically-mathematically and technically as a synonym for "intelligence", usually without taking into account physical, emotional and social experiences (see Fazi). In the connectionist AI approach, ‘learning’ processes are modeled with artificial neural networks consisting of different layers and nodes. They are trained to recognize similarities and representations within a big data training set and to compute probabilities of co-occurrences of individual expressions such as images, individual words, or parts of sentences. After symbolic AI had long been considered the dominant paradigm, the "golden decade" of deep neural networks – also called deep learning – dawned in the 2010s, according to Jeffrey Dean (Dean). 2012 is recognized as the year in which deep learning gained acceptance in various fields: on the one hand, the revolution of speech recognition is associated with Geoff Hinton et al.; on the other, the winning of the ImageNet Large Scale Visual Recognition Challenge with the help of a convolutional neural network represented a further breakthrough (Krizhevsky et al.). Deep neural networks with ever more interconnected nodes (neurons) and layers, powered by newly developed hardware components that enabled huge amounts of compute, became the standard.
Another breakthrough is associated with the development of the Transformer network architecture, introduced by Google in 2017. The currently predominant architecture for large language models is associated with better performance due to a larger size of the training data (Devlin et al.). Transformers are characterized in particular by the fact that computational processes can be executed in parallel (Vaswani et al.), a feature that has significantly reduced the models’ training time. Building on the Transformer architecture, OpenAI introduced the Generative Pre-trained Transformer model (GPT) in 2018, a deep learning method which again increased the size of the training datasets (Radford et al., “Improving Language Understanding”). Furthermore, OpenAI included a process of pre-training, linked to a generalization of the model and an openness towards various application scenarios, which is thought to be achieved through a further optimization step, fine-tuning. At least since the spread of the GPT model family, the imperative of unlimited scalability of language models has become dominant. This was especially brought forward by the physicist and entrepreneur Jared Kaplan and OpenAI, who identified a set of ‘scaling laws’ for neural network language models, stating that the more data available for training, the better the performance (Kaplan et al.). OpenAI has continued to increase the size of its models: while GPT-2 with 1.5 billion parameters (a type of variable learned in the process of training) was ten times the size of GPT-1 (117 million parameters), it was far surpassed by GPT-3 with 175 billion parameters. Meanwhile, OpenAI has transformed from a startup promoting the democratization of artificial intelligence (Sudmann) into a 30 billion dollar company (Martin), and from an open source community into a closed one. While OpenAI published research papers with the release of previous models describing the structure of the models, the size and composition of the training data sets, and the performance of the models in various benchmark tests, much of this information is missing from the paper on GPT-4.
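To make the claim of these ‘scaling laws’ concrete, the following is a schematic rendering (not the paper’s exact notation, and with the fitted constants left unspecified) of the power-law relations reported by Kaplan et al.: held-out test loss L decreases smoothly as the number of parameters N, the dataset size D or the training compute C is increased, with small empirically fitted exponents on the order of 0.05 to 0.1.

```latex
% Schematic form of the scaling laws in Kaplan et al.:
% loss falls as a power law in parameters (N), data (D) and compute (C);
% N_c, D_c, C_c and the exponents \alpha are empirical fits, not constants of nature.
L(N) \approx \left(\tfrac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\tfrac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\tfrac{C_c}{C}\right)^{\alpha_C}
```

It is this smooth, seemingly open-ended decrease of loss with size that underwrites the imperative of scaling described above.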
On errors and hallucinations
Generative language models, however, are being linked – above all by developers and computer scientists – to a specific kind of ‘error’: “[I]t is also apparent that deep learning based generation is prone to hallucinate unintended texts”, Ji et al. write in a review article collecting research on hallucination in natural language generation (Ji et al.). According to the authors, the term hallucination has been used in the field of computer vision since about 2000, referring to the intentionally created process of sharpening blurred photographic images, and only recently shifted to describing an incongruence between an image and its description. Since 2020, the term has also been applied to language generation, however not to describe a positive moment of artificial creativity (ibid.): generated texts that appear sound and convincing in a real-world context, but whose actual content cannot be verified, are referred to by developers as ‘hallucinations’ (ibid., 4). In this context, hallucination refers not only to factual statements such as dates and historical events or the correct citation of sources; it is equally used for the invention of non-existent sources or the addition of aspects in a text summary. While the content may be questionable, the linguistic form may be semantically correct and convincing, resulting in an apparent trust in the model or its language output.
For LeCun, Bengio and Hinton, “[r]epresentation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level.” (LeCun et al. 436). In technical terms, hallucination thus refers to a translation or representation error between the source text or ‘raw data’ [sic] on the one hand and the generated text, model prediction or ‘representation’ on the other. Furthermore, another source of hallucinations is located in outdated data, causing the (over time) increasing production of factually incorrect statements. This ‘error’ is explicitly linked to the large scale of generative models: since the training processes of these models are complex and expensive and thus seldom repeated, the knowledge incorporated – generally – remains static (Ji et al.). However, with each successive release of the GPT model family, OpenAI proclaims further minimization of hallucinations and attempts to prevent the programs from using certain terms or making statements that may be discriminatory or dangerous, depending on the context, through various procedures that are not publicly discussed (see Cao).
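As a purely illustrative sketch of what "composing simple but non-linear modules" means in the quotation above (the weights here are arbitrary made-up numbers, not anything drawn from a trained model), each layer below applies weighted sums followed by a non-linearity, turning one representation into a slightly more abstract one.

```python
# Minimal sketch of layered 'representation learning' in the sense of LeCun et al.:
# each layer is a simple non-linear module transforming the previous representation.
import math

def layer(inputs, weights, biases):
    """One module: weighted sums of the inputs followed by a sigmoid non-linearity."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

raw_input = [0.2, 0.7, 0.1]                      # the 'raw data'
hidden = layer(raw_input,                        # first, less abstract representation
               [[0.5, -1.0, 0.3], [1.2, 0.4, -0.7]], [0.0, 0.1])
output = layer(hidden, [[0.8, -0.5]], [0.2])     # higher, slightly more abstract level
print(hidden, output)
```

A trained network differs only in that the weights are adjusted on data rather than written by hand; the layered structure of successive transformations is the same.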
From the definitions of representation learning, hallucination, and the handling of this 'error', a number of conclusions can be drawn that are instrumental to the discourse on deep learning and artificial intelligence: The representation learning method assumes that it does not require any human intervention to recognize patterns in the available data, to form categories and make statements that are supposed to be consistent with the information located in the data. Both the data and the specific outputs of the models are conceived as universally valid. In this context, hallucination remains a primarily technical problem presented as technically solvable, and in this way it is closely linked to a promise of scaling: With the reduction of (this) error, text production seems to become autonomous, universal, and openly applicable in different settings.
On data politics
The assumption that data represent a ‘raw’ and objective found reality, which can be condensed and generated into a meaningful narrative through various computational steps, has been criticized widely (e.g. Boellstorff; Gitelman and Jackson). It is not only the composition of the data itself that is problematic, but equally the categories and patterns of meaning generated by algorithmic computational processes, which reinforce the bias – inevitably (see Jaton) – found in the data and make it once more effective (Benjamin; Noble). Technical computations claim an objectivity and autonomy that pushes the human processes of selecting and interpreting the data into the background, presenting the data instead as ‘found’ and ‘closed’ (e.g., boyd and Crawford; Kitchin). Building on a rich tradition of science and technology studies that has highlighted the socio-technical co-production of human, natural and technical objects (e.g. Knorr Cetina; Latour and Woolgar), Adrian Mackenzie has introduced the term 'machine learner' to refer to the entanglement of "humans and machines or human-machine relations […] situated amidst these three accumulations of settings, data and devices" (Mackenzie 23).
"[Big] data," as Taş writes, "are a site of political struggle." (Taş 569). This becomes clear not only through the public discussion of generative models and the underlying question of which statements language models are allowed to make. At the latest with the release of ChatGPT in November 2022, it was publicly debated which responses of the model were considered unexpected, incorrect or contrary to socio-cultural norms. Generative models have been tested in a variety of ways (Marres and Stark): The term 'jailbreaking' for example, denotes a practice in which users attempt to trick the model to create outputs that are restrained by the operating company's policy regulation. These include expressions considered as discriminating and obscene or topics such as medicine, health or psychology. In an attempt to circumvent these security measures, jailbreaking exposes the programmed limitations of the programs. Moreover, it also reveals what is understood by the corporations as the ‘sayable’ and the ‘non-sayable’ (see Foucault). This is significant insofar as these programs have already become part of everyday use, and the norms, logics, and limits inherent in them have become widely effective. In only five days after its release, ChatGPT had already reached one million users (Brockman). As foundation models (Bommasani et al.), OpenAI's GPT models and DALL-E are built into numerous applications, as are Google's BERT and LaMDA. Recently, the use of ChatGPT by a US lawyer or the demand to use the program in public administration (Armstrong; dpa/lno) was publicly discussed. These practices and usage scenarios make it clear that – practically – generative models represent technical infrastructures that are privately operated and give the operating big tech companies great political power. The associated authority in defining the language of these models but also in guiding politics recently became visible in a number of instances:
In an open letter, published in March 2023 on the website of the Future of Life Institute, AI researchers including Gary Marcus and Yoshua Bengio, as well as billionaire Elon Musk, urged a six-month halt to the training of models more powerful than GPT-4 (Future of Life Institute, “Pause Giant AI Experiments”). “Powerful AI systems”, they wrote, “should be developed only once we are confident that their effects will be positive and their risks will be manageable” (ibid.), referring to actual and potential consequences of AI technology, such as the spread of untrue claims or the automation and loss of jobs. Citing the creation of fake content, the impersonation of others, and the assumption that generated text is indistinguishable from that of human authors, OpenAI had initially restricted access to GPT-2 in 2019 (Radford et al., "Better Language Models"). Both the now more than 31,000 signatories of the open letter (as of June 2023) and OpenAI itself argue not against the architecture of the models, but for the use of so-called safety measures. The Future of Life Institute writes in its self-description: “If properly managed, these technologies could transform the world in a way that makes life substantially better, both for the people alive today and for all the people who have yet to be born. They could be used to treat and eradicate diseases, strengthen democratic processes, and mitigate - or even halt - climate change. If improperly managed, they could do the opposite […], perhaps even pushing us to the brink of extinction.” (Future of Life Institute, “About Us”).
As this depiction richly illustrates, the Future of Life Institute is an organization dedicated to ‘longtermism’, an ideology that promotes posthumanism and the colonization of space (see MacAskill), rather than addressing the multiple contemporary crises (climate, energy, the corona pandemic, global refugee movements, and wars) driven by global financial market capitalism, which profoundly reinforce social inequalities. Moreover, "AI doomsaying", i.e., the narrative of artificial intelligence as an autonomously operating agent whose power grows with access to more and more data and ever-improving technology, and whose workings remain inaccessible to human understanding as a black box, further enhances the influence and power of big tech companies by attributing to their products the power "to remake - or unmake - the world" (Merchant).
On the linking of language models and databases
Taking up criticisms of large language models such as the ecological and economic costs of training or the output of unverified or discriminatory content, there are debates and frequent calls to develop fundamentally smaller language models (e.g., Schick and Schütze). Among others, David Chapman, who together with Phil Agre developed alternatives to prevailing planning approaches in artificial intelligence in the late 1980s (Agre and Chapman), recently called for the development of the smallest language models possible: "AI labs, instead of competing to make their LMs bigger, should compete to make them smaller, while maintaining performance. Smaller LMs will know less (this is good!), will be less expensive to train and run, and will be easier to understand and validate." (Chapman). More precisely, language models should "'know' as little as possible – and retrieve 'knowledge' from a defined text database instead" (ibid.). In calling for an architectural separation of language and knowledge, Chapman and others tie in with long-running discussions in phenomenology and pragmatism as well as in formalism and the theory of mind.
Practices of data collection, processing and analysis are ubiquitous. Accordingly, databases are of great importance as informational infrastructures of knowledge production (cf. Nadim). They are not only "a collection of related data organized to facilitate swift search and retrieval" (ibid.), but also a "medium from which new information can be drawn and which opens up a variety of possibilities for shape-making" (Burkhardt 15, my translation). Lev Manovich, in particular, has emphasized the fundamental openness, connectivity and relationality of databases (Manovich). In this view, databases appear as accessible and explicit, allowing for an easy interchangeability and expansion of entries, eventually permitting an upscaling of the entire architecture. Databases have been an important component of symbolic AI – also known as Good Old-Fashioned AI (GOFAI). While connectionist AI takes an inductive approach that starts from "available" data, symbolic AI is based on a deductive, logic- and rule-based paradigm. Matteo Pasquinelli describes it as a "top-down application of logic to information retrieved from the world" (Pasquinelli 2). Symbolic AI has become known, among other things, for the representation of ontologies and semantic webs.
Linking external databases with small and large language models emerges as a concrete answer to the problems of generative models, in which knowledge is understood as being ‘embedded’ and which – as illustrated by the example of hallucination – leads to various errors. While connectionist approaches have dominated in recent times, architectures of symbolic AI seem to reappear. The combination of databases and language models is already a common practice and is currently discussed under the terms ‘knowledge grounding’ or ‘retrieval augmentation’ (e.g. Lewis et al.). Retrieval-augmented means that in addition to fixed training datasets, the model also draws on large external datasets, an index of documents whose size can run into the trillions of tokens. Meanwhile, models are called small(er) because they contain a smaller set of parameters in comparison to other models (Izacard et al.). In a retrieval process, documents are selected, prepared and forwarded to the language model depending on the context of the current task. With this setup, the developers promise improvements in efficiency in terms of resources such as the number of parameters, ‘shots’ (the number of task examples provided to the model), and the corresponding hardware resources (ibid.).
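To illustrate the retrieve-then-generate pattern described here, the following is a minimal sketch in which every name (Document, embed, retrieve, small_lm_generate) is a hypothetical placeholder rather than the interface of Atlas, RETRO or any other actual system; it only shows how a small model might condition its output on documents fetched from an external store instead of relying solely on knowledge ‘embedded’ in its parameters.

```python
# Minimal sketch of 'retrieval augmentation' / 'knowledge grounding'.
# All names are hypothetical placeholders, not the API of any real system.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

def embed(text: str) -> list[float]:
    # Placeholder embedding (letter counts); a real retriever uses a trained encoder.
    return [float(text.lower().count(ch)) for ch in "abcdefghijklmnopqrstuvwxyz"]

def similarity(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, store: list[Document], k: int = 2) -> list[Document]:
    """Select the k documents from the external database most similar to the query."""
    q = embed(query)
    return sorted(store, key=lambda d: similarity(q, embed(d.text)), reverse=True)[:k]

def small_lm_generate(prompt: str) -> str:
    # Stub standing in for a (small) language model conditioned on the prompt.
    return f"[answer generated from {len(prompt)} characters of grounded context]"

def answer(query: str, store: list[Document]) -> str:
    context = "\n".join(f"{d.title}: {d.text}" for d in retrieve(query, store))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return small_lm_generate(prompt)

store = [Document("Weaving", "Jacquard looms used punched cards to control patterns."),
         Document("Scaling", "Larger training datasets are said to improve performance.")]
print(answer("What controlled patterns on Jacquard looms?", store))
```

The point of the pattern is that the ‘knowledge’ lives in the store, which can in principle be inspected and updated, while the model is reduced to selecting and verbalising it.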
In August 2022, MetaAI released Atlas, a small language model extended with an external database which, according to the developers, outperformed significantly larger models with a fraction of the parameter count (ibid.). With RETRO (Retrieval-Enhanced Transformer), DeepMind has also developed a language model that consists of a so-called baseline model and a retrieval module (Borgeaud et al.). ParlAI, an open-source framework for dialog research founded by Facebook in 2017, presented Wizard of Wikipedia, a benchmark task for training language models with Wikipedia entries (Dinan et al.). The developers framed the problem of hallucination of, in particular, pre-trained Transformer models as one of updating knowledge. With this program, models are fine-tuned to extract information from database articles which is then casually inserted into a text or conversation without sounding like an encyclopedia entry, thereby appearing semantically and factually correct. With the imagining of small models as ‘free of knowledge’, the focus changes: now not only size and scale are considered a marker of performance, but also the infrastructural and relational linking of language models to external databases. This linking of small language models to external databases thus represents a transversal shift in scale: while the size of the language models is downscaled, the linking with databases implies a simultaneous upscaling.
However, the ideal of an accessible and controllable database falls short where it is conceived as potentially endlessly scalable. It is questionable whether a possibly limitless collection of knowledge is still accessible and searchable, or whether it does not transmute into its opposite: "When everything possible is written, nothing is actually said" (Burkhardt 11, my translation). What prior knowledge of the structure and content of the database would accessibility require? The conditions of its architecture and the processes of collecting, managing and processing the information are quickly forgotten (ibid. 9f.), obscuring the fact that databases, as sites of power, are also exclusive and always remain incomplete. Inherent in the idea of an all-encompassing database is a universalism that assumes a generally valid knowledge and thus fails to recognize situated, embodied, temporalized, and hierarchized aspects. Following Wittgenstein, Daston has likewise illustrated that even (mathematical) rules are ambiguous and, as practice, require interpretation of the particular situation (Daston 10).
On disputes over better architectures
The narrative of the opposition of symbolic and connectionist AI locates the origin of this dispute in a disagreement between, on the one hand, Frank Rosenblatt and, on the other, Marvin Minsky and Seymour Papert, who claimed in their book Perceptrons that single-layer perceptrons could not compute logical operations such as the exclusive-or (XOR) function (Minsky and Papert). This statement is often seen as causal for a cutback in research funding for connectionist approaches, later referred to as the ‘winter of AI’ (Pasquinelli 5). For Gary Marcus, professor of psychology and neural science, this dispute between the different approaches to AI continues to persist and is currently being played out at conferences, via Twitter and manifestos, and specifically on Noema, an online magazine of the Berggruen Institute, in which both Gary Marcus and Yann LeCun publish regularly. In an article titled "Deep Learning Is Hitting a Wall", Marcus calls for a stronger position for symbolic approaches and argues in particular for a combination of symbolic and connectionist AI (Marcus, “Deep Learning is Hitting a Wall”). For example, research by DeepMind had shown that "We may already be running into scaling limits in deep learning" and that increasing the size of models would not lead to a reduction in toxic outputs or to more truthfulness (Rae et al.). Google has done similar research (Thoppilan et al.). Marcus criticizes deep learning models for not having actual knowledge, whereas the existence of large, accessible databases of abstract, structured knowledge would be "a prerequisite to robust intelligence" (Marcus, “The Next Decade in AI”). In various essays, Gary Marcus recounts a dramaturgy of the conflict, with highlights including Geoff Hinton's 2015 comparison of symbols and aether, calling symbolic AI "one of science's greatest mistakes" (Hinton), or the direct attack on symbol manipulation by LeCun, Bengio and Hinton in their 2015 manifesto for deep learning published in Nature (LeCun et al.). For LeCun, however, the dispute reduces to a different understanding of symbols and their localization: while symbolic approaches locate them ‘inside the machine’, connectionist AI locates them outside, ‘in the world’. The problem of the symbolists would therefore lie in the "knowledge acquisition bottleneck" – the need to translate human experience into rules and facts, which cannot do justice to the ambiguity of the world (Browning and LeCun). “Deep learning is going to be able to do everything”, Marcus quotes Geoff Hinton (Hao).
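As a purely illustrative aside (not part of the original dispute's texts), the limitation Minsky and Papert pointed to can be shown in a few lines: a single-layer perceptron draws one linear decision boundary, which suffices for AND and OR but can never reproduce the XOR truth table. The brute-force search below finds weights for the first two functions and none for the third.

```python
# A single-layer perceptron can separate AND and OR, but not XOR,
# because XOR's truth table is not linearly separable.
from itertools import product

def perceptron(w1, w2, b):
    return lambda x1, x2: int(w1 * x1 + w2 * x2 + b > 0)

def can_learn(target):
    """Brute-force search over a small grid of weights and biases."""
    grid = [i / 2 for i in range(-6, 7)]  # -3.0 ... 3.0 in steps of 0.5
    for w1, w2, b in product(grid, repeat=3):
        f = perceptron(w1, w2, b)
        if all(f(x1, x2) == target(x1, x2) for x1, x2 in product([0, 1], repeat=2)):
            return True
    return False

AND = lambda a, b: a & b
OR  = lambda a, b: a | b
XOR = lambda a, b: a ^ b

print(can_learn(AND), can_learn(OR), can_learn(XOR))  # True True False
```

Networks with a hidden layer solve XOR easily, which is part of why the episode is now read less as a refutation of connectionism than as a historical leverage point in the funding dispute.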
The term ‘Neuro-Symbolic AI’, also called the ‘3rd wave of AI’, designates the connection of neural networks – which are supposed to be good at computing statistical patterns – with a symbolic representation. While Marcus has been accused of just wanting to put a symbolic architecture on top of a neural one, he points out that successful hybrids already exist, such as systems for Go or chess – which are obviously games and not languages! – and that this connection is far more complex, as there are several ways to achieve it, such as "extracting symbolic rules from neural networks, translating symbolic rules directly into neural networks, constructing intermediate systems that might allow for the transfer of information between neural networks and symbolic systems, and restructuring neural networks themselves" (Marcus, “Deep Learning Alone…”).
It’s not simply XOR
The linking of language models with databases, as shown above, is presented by Gary Marcus, MetaAI and DeepMind, among others, as a way to make the computational processes of the models accessible through a modified architecture. This transparency suggests at the same time the possibility of traceability, which is equated with an understanding of the processes, and promises a controllability and manageability of the programs. The duality presented in this context – between uncontrollable, nontransparent and inaccessible neural deep learning architectures on one side, and open, comprehensible and changeable databases, or links to them, on the other – is, I want to argue, fundamentally lacking in complexity. It assumes that the structure and content of databases are actually comprehensible. Databases, as informational infrastructures of encoded knowledge, must be machine-readable and are not necessarily intended for the human eye (see Nadim). Furthermore, this simplistic juxtaposition conceives of neural networks as black boxes whose ‘hidden layers’ between input and output inevitably defy access. In this way, the (doomsaying) narrative of autonomous, independent, and powerful artificial intelligence is further solidified, and the human work of design, the mostly precarious activity of labeling data sets, maintenance, and repair, is hidden from view.
Both the discourse about the better architecture and the signing of the open letter by ‘all parties’ also make clear that the representatives of connectionist AI and those of (neuro-)symbolic AI adhere to a technical solution to the problems of artificial intelligence. In either case, the world appears computable and thereby knowable, and follows a colonial logic in this regard. Furthermore, the question of whether processes of learning should be simulated 'inductively' by calculating co-occurrences and patterns in large amounts of 'raw' data, or 'top-down' with the help of given rules and structures, touches at its core on the 'problem' that the programs have no access to the world in the form of sensory impressions and emotions – a debate closely linked to the history of cybernetics and artificial intelligence (see, e.g., Dreyfus). With the modeling and constant extension of the models with more data and other ontologies, the programs are built following an ideal of human-like intelligence. In this perspective, the lack of access to the world is at the same time one of the causes of errors and hallucinations. Accordingly, the goal is to build models that speak semantically correctly and truthfully, while appearing as omniscient as possible, so that they can be easily used in various applications without relying on human correction: the models are supposed to act autonomously. Ironically, the attempt not to make mistakes reveals the artificiality of the programs.
The current hype around generative models like ChatGPT or DALL-E, and the monopolization and concentration of power within a few corporations that accompanies it, has seemingly obscured alternative approaches. Tsing's theory provided the occasion to look at the discourse around small, 'knowledge-grounded' language models, which – this was my initial assumption – oppose the imperative of constant scaling-up. Tsing writes that "Nonscalability theory is an analytic apparatus that helps us notice nonscalable phenomena" (Tsing 9). However, the architectures described here do not defy scalability; rather, a transversal shift occurs in that language models are scaled down while databases are scaled up at the same time. The object turned out to be more complex than the mere juxtaposition of scalable and nonscalable.
Conversational AI and generative models in particular are already an integral part of everyday processes of text and image production. The technically generated outputs produce a socially dominant understanding of reality, whose fractures and processes of negotiation are evident in the discussions about hallucinations and jailbreaking. It is therefore of great importance to follow and critically analyze both the technical (‘alternative’) architectures and affordances as well as the assumptions, interests, and power structures of the dominant (individual) actors (Musk, Altman, LeCun, etc.) and big tech corporations that are interwoven with them.
Works cited
Agre, Philip E., and David Chapman. "What Are Plans For?" Robotics and Autonomous Systems, vol. 6, no. 1, June 1990, pp. 17–34. ScienceDirect, https://doi.org/10.1016/S0921-8890(05)80026-0.
Armstrong, Kathryn. "ChatGPT: US Lawyer Admits Using AI for Case Research". BBC News, 27 May 2023. www.bbc.com, https://www.bbc.com/news/world-us-canada-65735769.
Bender, Emily M., et al. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM, 2021, pp. 610–23, https://doi.org/10.1145/3442188.3445922.
Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.
Boellstorff, Tom. "Making Big Data, in Theory". First Monday, vol. 18, no. 10, 2013. mediarep.org, https://doi.org/10.5210/fm.v18i10.4869.
Bommasani, Rishi, et al. "On the Opportunities and Risks of Foundation Models". ArXiv:2108.07258 [Cs], Aug. 2021. arXiv.org, http://arxiv.org/abs/2108.07258.
Borgeaud, Sebastian, et al. Improving Language Models by Retrieving from Trillions of Tokens. arXiv:2112.04426, arXiv, 7 Feb. 2022. arXiv.org, https://doi.org/10.48550/arXiv.2112.04426.
boyd, danah, and Kate Crawford. "Critical Questions for Big Data". Information, Communication & Society, vol. 15, no. 5, June 2012, pp. 662–79. Taylor and Francis+NEJM, https://doi.org/10.1080/1369118X.2012.678878.
Brockman, Greg [@gdb]. "ChatGPT just crossed 1 million users; it’s been 5 days since launch". Twitter, 5 December 2022, https://twitter.com/gdb/status/1599683104142430208.
Browning, Jacob, and Yann LeCun. "What AI Can Tell Us About Intelligence". Noema, 16 June 2022, https://www.noemamag.com/what-ai-can-tell-us-about-intelligence.
Burkhardt, Marcus. Digitale Datenbanken: Eine Medientheorie Im Zeitalter von Big Data. 1. Auflage, Transcript, 2015.
Cao, Sissi. "Why Sam Altman Won’t Take OpenAI Public". Observer, 7 June 2023, https://observer.com/2023/06/sam-altman-openai-chatgpt-ipo/.
Chapman, David [@Meaningness]. "AI labs should compete to build the smallest possible language models…". Twitter, 1 October 2022, https://twitter.com/Meaningness/status/1576195630891819008.
Crawford, Kate, and Vladan Joler. "Anatomy of an AI System". Virtual Creativity, vol. 9, no. 1, Dec. 2019, pp. 117–20, https://doi.org/10.1386/vcr_00008_7.
Daston, Lorraine. Rules: A Short History of What We Live By. Princeton University Press, 2022.
Dean, Jeffrey. "A Golden Decade of Deep Learning: Computing Systems & Applications". Daedalus, vol. 151, no. 2, May 2022, pp. 58–74, https://doi.org/10.1162/daed_a_01900.
Devlin, Jacob, et al. "BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding". Proceedings of NAACL-HLT 2019, 2019, pp. 4171–86, https://aclanthology.org/N19-1423.pdf.
Dinan, Emily, et al. Wizard of Wikipedia: Knowledge-Powered Conversational Agents. arXiv:1811.01241, arXiv, 21 Feb. 2019. arXiv.org, http://arxiv.org/abs/1811.01241.
dpa/lno. "Digitalisierungsminister für Nutzung von ChatGPT". Süddeutsche.de, 4 May 2023, https://www.sueddeutsche.de/politik/regierung-kiel-digitalisierungsminister-fuer-nutzung-von-chatgpt-dpa.urn-newsml-dpa-com-20090101-230504-99-561934.
Dreyfus, Hubert L. What Computers Can’t Do. Harper & Row, 1972.
Fazi, M. Beatrice. "Beyond Human: Deep Learning, Explainability and Representation". Theory, Culture & Society, vol. 38, no. 7–8, Dec. 2021, pp. 55–77. SAGE Journals, https://doi.org/10.1177/0263276420966386.
Foucault, Michel. Dispositive der Macht. Berlin: Merve, 1978.
Frankfurt, Harry G. On Bullshit. Princeton University Press, 2005.
Future of Life Institute. "About Us". Future of Life Institute, https://futureoflife.org/about-us/. Accessed 20 Apr. 2023.
Future of Life Institute. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute, 22 Mar. 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
Gitelman, Lisa, and Virginia Jackson. "Introduction". Raw Data Is an Oxymoron, edited by Lisa Gitelman, The MIT Press, 2013, pp. 1–14.
Hao, Karen. "AI Pioneer Geoff Hinton: 'Deep Learning Is Going to Be Able to Do Everything'". MIT Technology Review, 3 Nov. 2020, https://www.technologyreview.com/2020/11/03/1011616/ai-godfather-geoffrey-hinton-deep-learning-will-do-everything/.
Hinton, Geoff. "Aetherial Symbols". AAAI Spring Symposium on Knowledge Representation and Reasoning, Stanford University, CA, 2015.
Irani, Lilly. "The Cultural Work of Microwork". New Media & Society, vol. 17, no. 5, 2013, pp. 720–39. SAGE Journals, https://doi.org/10.1177/1461444813511926.
Izacard, Gautier, et al. Atlas: Few-Shot Learning with Retrieval Augmented Language Models. arXiv:2208.03299, arXiv, 16 Nov. 2022. arXiv.org, https://doi.org/10.48550/arXiv.2208.03299.
Jaton, Florian. The Constitution of Algorithms: Ground-Truthing, Programming, Formulating. The MIT Press, 2020.
Ji, Ziwei, et al. "Survey of Hallucination in Natural Language Generation". ACM Computing Surveys, vol. 55, no. 12, Dec. 2023, pp. 1–38. arXiv.org, https://doi.org/10.1145/3571730.
Kitchin, Rob. The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences. Sage, 2014.
Kaplan, Jared, et al. Scaling Laws for Neural Language Models. arXiv:2001.08361, arXiv, 2020, https://doi.org/10.48550/arXiv.2001.08361.
Knorr Cetina, Karin. The Manufacture of Knowledge: An Essay on the Constructivist and Contextual Nature of Science. Pergamon Press, 1981.
Krizhevsky, Alex, et al. "ImageNet Classification with Deep Convolutional Neural Networks". Advances in Neural Information Processing Systems, edited by F. Pereira et al., vol. 25, Curran Associates Inc., 2012, https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf.
Latour, Bruno, and Steve Woolgar. Laboratory Life: The Construction of Scientific Facts. Princeton University Press, 1979.
LeCun, Yann, et al. "Deep Learning". Nature, vol. 521, no. 7553, May 2015, pp. 436–44. DOI.org (Crossref), https://doi.org/10.1038/nature14539.
Lewis, Patrick, et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv:2005.11401, arXiv, 12 Apr. 2021. arXiv.org, https://doi.org/10.48550/arXiv.2005.11401.
Luitse, Dieuwertje, and Wiebke Denkena. "The Great Transformer: Examining the Role of Large Language Models in the Political Economy of AI". Big Data & Society, vol. 8, no. 2, 2021, pp. 1–14. SAGE Journals, https://doi.org/10.1177/20539517211047734.
MacAskill, William. What Is Longtermism?, https://www.bbc.com/future/article/20220805-what-is-longtermism-and-why-does-it-matter. Accessed 16 June 2023.
Mackenzie, Adrian. Machine Learners: Archeology of a Data Practice. The MIT Press, 2017.
Manovich, Lev. The Language of New Media. The MIT Press, 2001.
Marcus, Gary. "Deep Learning Alone Isn’t Getting Us To Human-Like AI". Noema, 11 Aug. 2022, https://www.noemamag.com/deep-learning-alone-isnt-getting-us-to-human-like-ai.
Marcus, Gary. "Deep Learning Is Hitting a Wall". Nautilus, 10 Mar. 2022, https://nautil.us/deep-learning-is-hitting-a-wall-238440/.
Marcus, Gary. The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence. arXiv:2002.06177, arXiv, 19 Feb. 2020. arXiv.org, https://doi.org/10.48550/arXiv.2002.06177.
Marres, Noortje, and David Stark. "Put to the Test: For a New Sociology of Testing". The British Journal of Sociology, vol. 71, no. 3, 2020, pp. 423–43. Wiley Online Library, https://doi.org/10.1111/1468-4446.12746.
Martin, Franziska. "OpenAI: Bewertung des ChatGPT-Entwicklers soll auf 30 Milliarden Dollar gestiegen sein". manager magazin, 9 Jan. 2023, https://www.manager-magazin.de/unternehmen/tech/openai-bewertung-des-chatgpt-entwicklers-soll-auf-30-milliarden-dollar-gestiegen-sein-a-6ccd7329-bcfc-445e-8b78-7b9d1851b283.
McQuillan, Dan. "ChatGPT Is a Bullshit Generator Waging Class War". Vice, 9 Feb. 2023, https://www.vice.com/en/article/akex34/chatgpt-is-a-bullshit-generator-waging-class-war.
Merchant, Brian. "Column: Afraid of AI? The Startups Selling It Want You to Be". Los Angeles Times, 31 Mar. 2023, https://www.latimes.com/business/technology/story/2023-03-31/column-afraid-of-ai-the-startups-selling-it-want-you-to-be.
Minsky, Marvin, and Seymour A. Papert. Perceptrons: An Introduction to Computational Geometry. 2nd printing with corrections, The MIT Press, 1972.
Nadim, Tahani. "Database". Uncertain Archives: Critical Keywords for Big Data, edited by Nanna Bonde Thylstrup et al., The MIT Press, 2021, pp. 125–132.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.
Pasquinelli, Matteo. "Machines That Morph Logic". Glass Bead, 2017, https://www.glass-bead.org/article/machines-that-morph-logic/.
Radford, Alec, et al. "Better Language Models and Their Implications". OpenAI, 14 Feb. 2019, https://openai.com/blog/better-language-models/.
Radford, Alec, et al. "Improving Language Understanding by Generative Pre-Training". OpenAI, 2018, https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf.
Rae, Jack W., et al. Scaling Language Models: Methods, Analysis & Insights from Training Gopher. arXiv:2112.11446, arXiv, 21 Jan. 2022. arXiv.org, https://doi.org/10.48550/arXiv.2112.11446.
Rieder, Bernhard, and Yarden Skop. "The Fabrics of Machine Moderation: Studying the Technical, Normative, and Organizational Structure of Perspective API". Big Data & Society, vol. 8, no. 2, July 2021. SAGE Journals, https://doi.org/10.1177/20539517211046181.
Schick, Timo, and Hinrich Schütze. "It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners". Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, 2021, pp. 2339–52. DOI.org (Crossref), https://doi.org/10.18653/v1/2021.naacl-main.185.
Strubell, Emma, et al. Energy and Policy Considerations for Deep Learning in NLP. arXiv:1906.02243, arXiv, 5 June 2019. arXiv.org, https://doi.org/10.48550/arXiv.1906.02243.
Sudmann, Andreas. "On the Media-Political Dimension of Artificial Intelligence: Deep Learning as a Black Box and OpenAI". Digital Culture & Society, vol. 4, no. 1, 2018, pp. 181–200, https://doi.org/10.25969/MEDIAREP/13531.
Taş, Birkan. "Vulnerability". Uncertain Archives: Critical Keywords for Big Data, edited by Nanna Bonde Thylstrup et al., The MIT Press, 2021, pp. 569–78.
Thoppilan, Romal, et al. LaMDA: Language Models for Dialog Applications. arXiv:2201.08239, arXiv, 10 Feb. 2022. arXiv.org, https://doi.org/10.48550/arXiv.2201.08239.
Tsing, Anna Lowenhaupt. "On Nonscalability: The Living World Is Not Amenable to Precision-Nested Scales". Common Knowledge, vol. 18, no. 3, Aug. 2012, pp. 505–24, https://doi.org/10.1215/0961754X-1630424.
Vaswani, Ashish, et al. "Attention Is All You Need". Proceedings of the 31st International Conference on Neural Information Processing Systems, Curran Associates Inc., 2017, pp. 6000–10.
Inga Luchs
AI for All? Challenging the Democratization of Machine Learning
Abstract
Research in artificial intelligence (AI) is heavily shaped by big tech today. In the US context, companies such as Google and Microsoft hold a tremendous position of power due to their control over cloud computing, large data sets and AI talent. In light of this dominance, many media researchers and activists demand open infrastructures and community-led approaches to provide alternative perspectives – however, it is exactly this discourse that companies are appropriating for their expansion strategies. In recent years, big tech has taken up the narrative of democratizing AI by open-sourcing their machine learning (ML) tools, simplifying and automating the application of AI and offering free educational ML resources. The question that remains is how an alternative approach to ML infrastructures – and to the development of ML systems – can still be possible. What are the implications of big tech’s drive for infrastructural expansion under the umbrella of ‘democratization’? And what would a true democratization of ML entail? I will trace these two questions by critically examining, first, the open-source discourse advanced by big tech, and, second, the discourse around the AI open-source community Hugging Face, which places AI ethics and democratization at the heart of its endeavour. Lastly, I will show how ML algorithms need to be considered beyond their instrumental notion. It is thus not enough to simply hand over the technology to the community – we need to think about how we can conceptualize a radically different approach to the creation of ML systems.
Introduction
Machine learning (ML) has grown to be a central area of artificial intelligence in the last decades. From search engine queries and the filtering of spam e-mails to the recommendation of books and movies, the detection of credit card fraud and predictive policing, applications based on ML algorithms are taking over the classification tasks of our everyday life. These algorithmic operations, however, cannot be separated from the cultural sphere in which they emerge. Consequently, they not only mirror biases already existing in society, but further deepen them, consolidating race, class, and gender as immutable categories (Apprich, “Introduction”).
The research and development of AI and ML algorithms is heavily shaped by big technology companies. In the United States, for example, Google, Amazon, and Microsoft wield a great deal of power over the AI industry, because they control the necessary cloud computing resources and data sets and are uniquely positioned to attract highly qualified AI talent (Dyer-Witheford, Kjøsen and Steinhoff 43). In addition, they increasingly offer AI or ML ‘as a service’. This includes ready-to-use AI technologies that external companies can feed into their products, as well as open-source access to their infrastructures for the training and development of ML models (Srnicek, “The Political Economy of Artificial Intelligence”).
With respect to the aforementioned issues of algorithmic discrimination (O’Neil; Eubanks), the dominance of big tech in the development of ML is crucial, because who is developing AI systems significantly shapes how AI is imagined and developed – and these spaces “tend to be extremely white, affluent, technically oriented, and male.” (West et al. 6) Countering this problem, many critical media researchers plead for a participatory approach that includes more diverse communities in the creation of AI systems (Costanza-Chock; Benjamin; D’Ignazio and Klein). In her book Race after Technology, for instance, Ruha Benjamin underlines that the development of AI systems must be guided by values other than economic interests and demands “a socially conscious approach to tech development that would require prioritizing equity over efficiency, social good over market imperatives.” (Benjamin 183) Further, following Benjamin, this re-design “cannot be limited to industry, nonprofit, and government actors, but must include community-based organizations that offer a vital set of counternarratives.” (Benjamin 188) According to the authors of the book Data Feminism, Catherine D’Ignazio and Lauren F. Klein, this includes a firm stance against the forms of technological solutionism often performed by big tech. In this sense, they call to tackle problems of algorithmic discrimination not as ‘technical bias’ of the system, but rather to “address the source of the bias: structural oppression.” (D’Ignazio and Klein 63) Consequently, this perspective “leads to fundamentally different decisions about what to work on, who to work with, and when to stand up and say that a problem cannot and should not be solved by data and technology.” (ibid.)
The “Design Justice Network”, a collective consisting of designers, developers, researchers and activists, assembles these demands. Taking up Joichi Ito’s call for ‘participant design’, this network has formulated several principles that should guide technological development, focusing on the inclusion of communities currently marginalized by AI systems and favoring collaborative approaches by “shar[ing] design knowledge and tools”, in order to “work towards sustainable, community-led and -controlled outcomes” (Costanza-Chock 11-12). At the same time, it is exactly this discourse that big tech companies have appropriated: they, too, aim to ‘democratize’ AI – which entails distributing both its benefits and its tools to everyone.
In this research essay, I will first outline the way big tech companies utilize the democratization discourse to their economic advantage, positioning their ML infrastructures in a way that serves their expansion. Secondly, against the background of many media researchers’ call for ‘community-led practices’ in terms of AI systems, I will critically investigate the US-American AI company Hugging Face, which advertises a “community-centric approach”. Similar to the discourse around community-led AI, the company sees itself “on a journey to advance and democratize artificial intelligence through open source and open science”, explicitly positioning itself against big tech, which has not had “a track record of doing the right thing for the community” (Goldman). In this regard, I aim to analyse what their notion of ‘democratization’ entails, particularly against the background of Hugging Face recently announcing its cooperation with Amazon Web Services (AWS).
While access to AI infrastructures and community-led AI development are certainly important, I will lastly show how ML algorithms need to be considered beyond their instrumental notion. It is thus not enough to simply hand over the technology to the community – we need to think about how we can conceptualize a radically different approach to the creation of ML systems. This particularly entails questioning the deeply capitalist notions along which ML and its infrastructures are currently developed, and how we might break with these values that have been nourished for decades and that are deeply intertwined with ML research, development and education.
Tools and benefits “for everyone”: Big tech’s AI democratization
In the last decade, US-based tech companies primarily known for their social media platforms, search engines or online marketplaces have increasingly centered their endeavors around artificial intelligence. In 2017, for instance, CEO Sundar Pichai announced at the yearly Google I/O conference that the company would be pursuing an “AI first approach” (Google Developers). Since then, Google has explicitly worked on the integration of ML technologies into its products, such as its search engine, its YouTube recommendation algorithm or its file hosting service Google Drive. Around the same time, the research department Google AI was established – and other big tech companies likewise set up, or further invested in, their own AI sections (see, for instance, IBM, Microsoft, and Meta). Next to integrating AI into their applications, these companies have moreover started to offer their AI technologies themselves as a product, moving themselves into the heart of the AI industry (Srnicek, “Data, Compute, Labor” 242).
The corporate advances in the field of AI are accompanied by a discourse around ‘AI democratization’, which centers around the aim to make AI applications and infrastructures available to everyone. As Marcus Burkhardt details, this is targeted at users and developers:
“For developers this democratization entails the possibility to make use of AI in their own products and to partake in shaping the future of AI by having open or paid access to resources and services […]. Users on the other hand are enlisted in the democratization of AI as beneficiaries of technologies that are ‘infused’ with artificial intelligence and machine learning.” (211)
For the latter narrative, the companies closely link the advancement of AI with societal progress. Google AI’s mission, for instance, is to “create technologies that solve important problems and help people in their daily lives”, emphasizing the potential of AI to “empower people, widely benefit current and future generations, and work for the common good.” (Google AI, “Principles”) Microsoft underlines its aspiration to democratize AI “for every person and every organization”, grounded in the belief that the ‘essence’ of AI is “about helping everyone achieve more – humans and machines working together to make the world a better place.” (Microsoft News Center) And Meta states as its goal “to build AI responsibly, for everyone” and is “advancing AI for a more connected world.” (Meta AI)
The former narrative concerns the democratization of AI development, which corresponds to the open access provision of infrastructures necessary to do machine learning (such as data sets, cloud storage and computing resources, but also frameworks and libraries). Google offers a whole section on its AI website titled “Tools for everyone.” Here, the company claims: “We’re making tools and resources available so that anyone can use technology to solve problems. Whether you’re just getting started or you’re already an expert, find the resources you need to reach your next breakthrough.” (Google AI, “Tools”) This includes access to its open-source machine learning platform TensorFlow, as well as to Google datasets, pre-trained models and other training resources. Microsoft states: “At Microsoft, we have an approach […] that seeks to democratize Artificial Intelligence (AI), to take it from the ivory towers and make it accessible for all”, which includes the availability of their “intelligent capabilities […] to every application developer in the world.” (Microsoft News Center) And Amazon Web Services deploys its cloud as a means to “accelerat[e] the pace of innovation, democratiz[e] access to data, and allow[] researchers and scientists to scale, work collaboratively, and make new discoveries from which we may all benefit.” (Kratz)
Furthermore, the companies aim to lower the barrier to AI development tools “so that even non-experts inside and outside companies and universities can increasingly use the corresponding technologies.” (Sudmann 23) This entails, for instance, a variety of educational resources on offer, in the form of free ML introductory courses and training certificates which address not only experienced developers but also those who are looking for an entry point into ML development (Luchs, Apprich and Broersma). We can also notice a growing platformization of AI development tools, which leads to automated and standardized forms of ML development and should enable anyone to develop ML systems without prerequisite knowledge (see, for instance, Google’s Vertex AI).
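To give a sense of how far this barrier has been lowered, consider the following minimal sketch in Python, written with Google’s open-source TensorFlow/Keras framework. It is an illustration only: the dataset, layer sizes and training settings are arbitrary choices and not taken from any of the courses or platforms discussed here. Loading a benchmark dataset, defining a classifier, and training and evaluating it are each reduced to a handful of ready-made library calls.

 import tensorflow as tf

 # Load a benchmark dataset bundled with the framework (handwritten digits)
 # and rescale pixel values to the 0-1 range.
 (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
 x_train, x_test = x_train / 255.0, x_test / 255.0

 # Assemble a small feed-forward classifier from ready-made layers
 # (the layer sizes here are arbitrary, chosen only for illustration).
 model = tf.keras.Sequential([
     tf.keras.layers.Flatten(input_shape=(28, 28)),
     tf.keras.layers.Dense(128, activation="relu"),
     tf.keras.layers.Dense(10, activation="softmax"),
 ])

 model.compile(optimizer="adam",
               loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])

 # Training and evaluation are each reduced to a single call.
 model.fit(x_train, y_train, epochs=1)
 model.evaluate(x_test, y_test)

The point is not the particular model but the degree of abstraction: the whole workflow comes pre-packaged, with the underlying computational and statistical decisions hidden from the user.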
For both users of AI applications and their developers, the democratization of AI revolves primarily around the notion of access – and increasing the availability of AI technologies does indeed facilitate their democratization in some regard: it enables a wide accessibility of ML infrastructures, and it expands – to some extent – the circle of those who can use and develop AI technologies in the first place. Nevertheless, this discourse must be viewed critically and as part of a larger historical trajectory that goes back to the beginnings of network technologies, their promises, but also their commodification. In this sense, the narrative that access to technologies is empowering for individuals, and that this, in turn, leads to more democratic societies, is by no means new. On the contrary, it has been an integral part of Silicon Valley’s “Californian Ideology” (Barbrook and Cameron) since its early beginnings – as Fred Turner for instance shows in his book From Counterculture to Cyberculture (2006), where he traces the origins of this digital utopianism.
One illustrative example is the virtual communities that began to appear in the 1980s with the advent of personal computers and bulletin board systems. For the first time, users could communicate across local barriers and in real-time, which facilitated the forming of connections in new ways (Apprich, Technotopia 90). As a result, these virtual communities were seen as a glimpse of the promise to “dissolve social hierarchies and enable a self-government of emancipated citizens” (ibid. 91). It is this faith in the liberating power of network technologies – further manifested by the Internet emerging in the 1990s – that still shapes Silicon Valley up until today. Mark Zuckerberg, founder of Facebook, for instance, “envisions a world in which individuals, communities, and nations create an ideal social order through the constant exchange of information – that is, through staying ‘connected’” (Turner, “Machine Politics”). What is even more important, however, is that the companies of Silicon Valley see it as their responsibility to provide the necessary infrastructures. It is thus the same narrative – the view that new technologies are facilitators of social progress – which seamlessly fits into their capitalist aims and which “proved enormously profitable across Silicon Valley. By justifying the belief that for-profit systems are the best way to improve public life, it has helped turn the expression of individual experience into raw material that can be mined, processed, and sold.” (ibid.)
We can tell a similar story when it comes to software development. As Nathaniel Tkacz shows, two movements emerged in its initial years, which displayed “two competing mutations of liberalism” (24): the Free Software Movement initiated by Richard Stallman in the 1980s, which declared that all software should be ‘free’ in terms of usage, distribution and modification – and thus non-proprietary; and the Open Source Initiative, founded by Bruce Perens and Eric Raymond in 1998 (ibid. 21-23), which accounted for a liberalism that facilitated economic growth and innovation. In his popular writing The Cathedral and the Bazaar (2001), Raymond elaborates that the development of software should not be centrally controlled (as in his notion of the cathedral), but rather be as open as possible, allowing for a high degree of individual contributions (resembling a bazaar). At the same time, companies should be able to make use of the increased productivity by commodifying the results. Raymond’s bazaar thus centers around a market for “competing ‘agendas and ideas’; progress ‘at a speed barely imaginable’; and the miraculous emergence of a ‘coherent and stable system’” (Tkacz 24, cited after Raymond).
It is this economic line of thought that also dominates the AI industry today. Big tech companies have an evident economic interest in expanding the reach of their AI technologies and infrastructures. Hence, what is advertised as democratization must above all be viewed as an expansion strategy, where users are positioned as customers of corporate products. By making their infrastructures openly accessible, companies draw more developers to them, which further establishes these infrastructures in AI development generally. Further, developers trained on these infrastructures become dependent on the companies’ products. And – as we can see – the open-source discourse serves as a means to drive ML research and to harness free contributions from the community, which, consequently, leads to further improvement of the corresponding AI technologies (Metz).
Advances under the frame of democratization can thus be understood as measures to ensure that company-owned products become “part of the general conditions of production”, serving as a “source of robust no-cost programming, a potential recruitment ground, and a strategic site for attracting users to their platforms.” (Dyer-Witheford, Kjøsen and Steinhoff 54) Or, as the authors state at another point: “If AI becomes generally available, it will still remain under the control of these capitalist providers.” (ibid. 56)
Against the background of this corporate dominance, Pieter Verdegem underlines the importance of current AI ethics debates as outlined in the introduction, but pleads particularly for a “radical democratization of AI” which not only entails accessibility to everyone, but takes the political and economic dimensions of the AI industry into account. Facing “a situation whereby only a few organisations, whether governmental or corporate, have the economic and political power to decide what type of AI will be developed and what purposes it will serve” (Verdegem, “Introduction” 12), Verdegem demands “a digital infrastructure that is available to and provides advantages for a broad range of stakeholders in society, not just the AI behemoths.” (Verdegem, “Dismantling AI Capitalism” 8)
In the following, I will thus shift attention to Hugging Face, an AI company that particularly centers the ‘community’ in its endeavors, and analyze it against the background of these demands.
Community-centric AI: Hugging Face as alternative to big tech?
Hugging Face is a New York-based AI company founded in 2016 by Clement Delangue, Julien Chaumond and Thomas Wolf. Originally, Hugging Face started out as a chatbot app for teenagers (Dillet). After positive responses to open-sourcing the models the chatbot was built on, the company moved to become a platform provider for open-source ML technologies (Osman and Sewell). Hugging Face is funded by 26 different investors and had raised $160.2 million in funding as of May 9, 2022 (Crunchbase). More than 5,000 organizations are using its models, including companies such as Meta AI, Google AI, Intel and Microsoft (Hugging Face, Official Website). The company has also been listed in Forbes’ “AI 50” list in 2022, which “recognizes standouts in privately-held North American companies making the most interesting and effective use of artificial intelligence technology.” (Popkin, Ohnsman and Cai)
On its website, Hugging Face displays itself as “the AI community building the future.” (Hugging Face, Official Website) In an interview, founder Delangue elaborates:
“Just as science has always operated by making the field open and collaborative, we believe there’s a big risk of keeping machine learning power very concentrated in the hands of a few players, especially when these players haven’t had a track record of doing the right thing for the community. By building more openly and collaboratively within the ecosystem, we can make machine learning a positive technology for everyone and work on some short-term challenges that we are seeing.” (Goldman)
As we can see, Hugging Face follows narratives very similar to those advanced by big tech companies: first, the belief in social progress advanced by AI from which everyone should benefit, and second, the need for collaboration when it comes to the development of AI systems. However, they explicitly demand to counter the present concentration of power in the AI industry. Their approach thus centers on the desire to open-source models previously guarded by bigger players – particularly large language models, which are computationally intensive and not easily reproducible – in order to let everyone take part in the development of AI. As they state: “No single company, including the Tech Titans, will be able to ‘solve AI’ by themselves – the only way we’ll achieve this is by sharing knowledge and resources in a community-centric approach.” (Hugging Face, “Hugging Face Hub Documentation”)
In order to do so, Hugging Face offers an open-source library with “more than 100,000 machine learning models […], enabling others in turn to use those pretrained models for their own AI projects instead of having to build models from scratch.” (Popkin, Ohnsman and Cai) Moreover, Hugging Face is not only a model library, but – taking the developer platform GitHub as a role model – acts as a platform: on the ‘Hugging Face Hub’, developers can store code and training data sets, but also “easily collaborate and build ML together” (Hugging Face, “Hugging Face Hub Documentation”).
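To give a concrete impression of what this reuse looks like in practice, the following minimal sketch in Python uses Hugging Face’s open-source transformers library to download a pretrained sentiment classification model from the Hub and apply it locally; the input sentence is an arbitrary example, and the particular model is simply the library’s default for this task.

 from transformers import pipeline

 # Fetch a pretrained sentiment model from the Hugging Face Hub
 # (the default model for this task is resolved by the library)
 # and run it on an arbitrary example sentence.
 classifier = pipeline("sentiment-analysis")
 print(classifier("Open infrastructures do not guarantee democratization."))

Two lines of code thus stand in for the data collection, labelling and training that the model’s original developers performed, which is precisely what the platform markets as its contribution to ‘democratization’.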
Given their focus on community-centered approaches and their explicit stance against AI monopolization, the company seems to meet the demands outlined by media researchers above. However, with the company recently announcing its cooperation with Amazon Web Services (AWS), it seems that they, too, are deeply integrated into the economically driven ML ecosystem. Against the background of significant progress in the area of generative AI models (such as in text, audio or visual creation), which are generally proprietary and thus not publicly accessible, Hugging Face and AWS have declared a “long-term strategic partnership”, which is to “accelerate the availability of next-generation machine learning models by making them more accessible to the machine learning community and helping developers achieve the highest performance at the lowest cost.” (Boudier, Schmid and Simon) Specifically, this means that Hugging Face commits itself to AWS as its main cloud provider, so that users of Hugging Face can move easily between its platform and Amazon’s ML platform SageMaker, which is hosted on AWS and offers advanced cloud computing power (Bathgate). Vice versa, customers of AWS will be provided with Hugging Face models on Amazon’s platform.
Consequently, Hugging Face, too, while taking up the banner of democratization, principally acts within an economic context. A look at its business model provides further insight in this regard: while Hugging Face does offer its core technologies open-source and cost-free, there are several additional features that come at a price and are organized around subscriptions and consumption-based plans (Osman and Sewell). Here, Hugging Face’s paying customers comprise mostly big corporations, “seeking expert support, additional security, autotrain features, private cloud, SaaS, and on-premise model hosting” (Osman and Sewell).
In this sense, it seems to become increasingly difficult not only to create alternative discourses around AI technologies, but also to provide sustainable alternatives that operate outside of big tech’s domain, given the challenge of reproducing the necessary infrastructures.
So what might AI democratization look like? Taking up a minor perspective
AI technologies and their platforms are not an isolated phenomenon, but can rather be regarded as another point in the genealogy of the commercialization of digital technologies by big tech companies – in this regard, their democratization, too, needs to be viewed critically.
As elaborated earlier, already with the emergence of net cultures in the 1990s, Silicon Valley forwarded a profound belief in the ability of technology to enhance collectivity and collaboration (Apprich, Technotopia 45). At the same time, however, an emerging net critique in Europe also believed in the potential of the new media technologies, but explicitly opposed the US-based Californian Ideology (ibid. 35). For its advocates, participation meant not solely the contribution of content to the emerging social networks, but being part of the growing project as a whole, “determining the directions, rules and enabling infrastructures of one’s own actions in a collective, participatory process.” (Stalder, “Partizipation” 221, own translation) It was then in the subsequent phase of commercialisation and the emergence of Web 2.0 that those “core concepts of the first internet generation – communication, participation, openness to new things […] – [were made] suitable for the masses”, turning ‘participation’ into “user-generated content” (ibid. 223, own translation).
Consequently, while digital media technologies were becoming generally available, “the infrastructures behind these tools [got] increasingly concentrated in the hands of a few, private corporations.” (Apprich, Technotopia 146) And even though their platformization oftentimes simplified their use (van Dijck, The Culture of Connectivity 6), it was the participation in their design that was closed off in favor of the streamlining and commercialization of user behavior.
With regard to the Californian Ideology, Clemens Apprich considers the instrumentalization of technology as core problematic, which hinders escaping a capitalist logic:
“The problem with this is that technology is not being recognised in its own logic, but rather seen as a means for something else – typically the liberation of the individual from the constraints of society. So, instead of acknowledging the socio-technical potential within it, technology is submitted to a communitarian thinking, which is predominantly defined by capitalist economy.” (Technotopia 144)
As we have seen, these dynamics are very similar to the discourse around AI democratization: both on the side of demands for a community-led AI as well as on the side of big tech, we can recognize not only a wish to make AI accessible for all, but also the belief that “bringing the benefits of AI to everyone” (Google AI, Official Website) will lead to social progress. And particularly in the big tech discourse, this serves economic rationales. While generally a domain reserved for technical experts, under the frame of AI democratization machine learning is commodified into a form that is easily executable. However, it is not true participation – or democratization – that is enacted here. Rather, the notion of democratization serves as a front for the establishment of corporate products for AI development, as well as for free labor via the tasks developers perform on openly accessible corporate frameworks. Moreover, similar to how platform companies today dominate how we perform search, consume content online or interact with friends and family, so do AI technologies become gradually platformized, with big tech companies such as Google, Microsoft and Amazon competing to become the monopoly provider. At the same time, demanding the integration of community-based organizations and counternarratives to these economic rationales proves increasingly difficult given the dominance big tech has already established in the AI industry – materially and discursively.
What we consequently need to do is go beyond the notion of ‘access’ as the sole condition for participation. At this point, we might again take as a model those 1990s net cultures that Apprich compellingly describes in his search for alternative imaginaries:
“In Europe, but also in the United States and elsewhere, non-commercial Internet Providers (e.g. Backspace, Centre for Culture & Communication, De Digitale Stad, Internationale Stadt, Ljudmila, Silver Server, Public Netbase, The Thing, XS4ALL) did not only offer Internet access, but also a platform for the self-determined use of new media technologies. The idea was to position net critique at the centre of action and to open up spaces of creation and experimentation […].” (Technotopia 37)
Related to the application and development of AI, its democratization should equally mean not only the general availability of technology in the form of its material resources, but also a deeper understanding of and engagement with AI – which means challenging the existing power structures within the industry, but also confronting the inner logics of the technologies. Concepts such as scalability are deeply integrated into the practice of machine learning itself, which requires large amounts of data and high computational power; values such as universal applicability, efficiency, and simplicity dominate its everyday use (Luchs, Apprich, and Broersma); and AI infrastructures are constructed as “uniform blocks ready for further expansion” (Tsing 505), as we can see from the companies’ attempts to attract users to their platforms and to expand the reach of their products (which are already extremely difficult to escape). We need to reflect on how we can conceptualize a radically different approach to the creation of ML systems, one which breaks with these capitalist values that have been nourished for decades and are deeply intertwined with ML research, development and education – but also on how we can enable a relationship with AI technologies that does not consist in the mere execution of corporate products, but rather in true participation in their design.
One of the key beliefs of the proponents of big tech, as Joichi Ito states, is “that the world is ‘knowable’ and computationally simulatable, and that computers will be able to process the messiness of the real world just like they have every other problem that everyone said couldn’t be solved by computers.” (4) Instead, he posits, “[w]e need to embrace the unknowability – the irreducibility – of the real world […].” (ibid. 6) One way to conceive of an alternative perspective might thus be to follow a ‘nonscalability theory’ as an “alternative for conceptualizing the world” which “pays attention to the mounting pile of ruins that scalability leaves behind” (Tsing 507). For machine learning, this could mean to acknowledge the limitations that it poses – concerning the messiness of reality and the impossibility of lossless translation, but also the messiness of the ML process itself, dealing with dirty data and the political notion of discrimination (Apprich, “Introduction”; Steyerl, “A Sea of Data”).
But also in practically engaging with the technology – in learning to do machine learning and in interacting with its platforms, libraries and datasets – we need to strive for critical practices. We should oppose big tech’s tendency to hide ML operations away behind obfuscating interfaces of which we are mere users, and look behind them in order to gain a deeper understanding of the technical operations and to acknowledge their embeddedness in our world. In fully understanding this condition, we sooner or later need to ask: is machine learning the best possible way to do data filtering and classification – or might we rather seek other technological means?
Acknowledgements
I would like to thank the participants of the APRJA workshop “Towards a Minor Tech”, the members of the Groningen “Data Infrastructures and Algorithmic Practices” research group and the anonymous peer reviewer for their extensive feedback on previous drafts as well as for the inspiring conversations.
Works cited
Apprich, Clemens. Technotopia. A Media Genealogy of Net Cultures. Rowman & Littlefield, 2017.
Apprich, Clemens. “Introduction.” Pattern Discrimination, edited by Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer and Hito Steyerl. meson press/Minnesota Press, 2018, pp. ix-xii.
Barbrook, Richard and Andy Cameron. “The Californian Ideology.” Mute vol. 1, no. 3, September 1995. https://www.metamute.org/editorial/articles/californian-ideology.
Bathgate, Rory. “AWS and Hugging Face Partner to ‘Democratise’ ML, AI Models.” ITPro, 22 February 2023. https://www.itpro.co.uk/cloud/370113/aws-and-hugging-face-partner-to-democratise-ml-ai-models.
Benjamin, Ruha. Race after Technology. Abolitionist Tools for the New Jim Code. Polity Press, 2019.
Boudier, Jeff, Schmid, Philipp and Julien Simon. “Hugging Face and AWS Partner to Make AI More Accessible.” Hugging Face Blog, 21 February 2023. https://huggingface.co/blog/aws-partnership.
Burkhardt, Marcus. “Mapping the Democratization of AI on GitHub. A First Approach.” The Democratization of Artificial Intelligence. Net Politics in the Era of Learning Algorithms, edited by Andreas Sudmann. transcript, 2019, pp. 209-221.
Costanza-Chock, Sasha. “Design Justice: Towards an Intersectional Feminist Framework for Design Theory and Practice.” Proceedings of the Design Research Society, 2018.
Crunchbase. “Hugging Face.” 2023. https://www.crunchbase.com/organization/hugging-face.
D’Ignazio, Catherine and Lauren F. Klein. Data Feminism. The MIT Press, 2020.
Dillet, Romain. “Hugging Face Wants to Become Your Artificial BFF.” TechCrunch, 9 March 2017. https://techcrunch.com/2017/03/09/hugging-face-wants-to-become-your-artificial-bff/.
Dyer-Witheford, Nick, Kjøsen, Atle Mikkola and James Steinhoff. Inhuman Power. Artificial Intelligence and the Future of Capitalism. Pluto Press, 2019.
Eubanks, Virginia. Automating Inequality. St. Martin’s Press, 2017.
Goldman, Sharon. “Fresh Off $2B Valuation, ML Platform Hugging Face Touts ‘Open and Collaborative Approach.’” Venture Beat, 9 May 2022. https://venturebeat.com/ai/fresh-off-2b-valuation-machine-learning-platform-hugging-face-highlights-open-and-collaborative-approach/.
Google AI. Official Website. 2023. https://ai.google/.
Google AI. “Principles.” Google AI Official Website, 2023. https://ai.google/principles.
Google AI. “Tools.” Official Website, 2023. https://ai.google/tools.
Google Developers. “Google I/O Keynote (Google I/O '17).” Uploaded on 17 May 2017. https://www.youtube.com/watch?v=Y2VF8tmLFHw.
Hugging Face. Official Website, 2023. https://huggingface.co.
Hugging Face. “Hugging Face Hub Documentation.” Official Website, 2023. https://huggingface.co/docs/hub/index.
Ito, Joichi. “Resisting Reduction: A Manifesto.” Journal of Design and Science, 2018.
Kratz, Jeff. “Accelerating and democratizing research with the AWS Cloud.” AWS Public Sector Blog, 14 September 2022. https://aws.amazon.com/blogs/publicsector/accelerating-democratizing-research-aws-cloud/.
Luchs, Inga, Apprich, Clemens and Marcel Broersma. “Learning Machine Learning. On the Political Economy of AI Online Courses.” Big Data & Society, vol. 10, no. 1, 2023.
Meta AI. “About.” Official Website, 2023. https://ai.facebook.com/about/.
Metz, Cade. “Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine.” WIRED, 9 November 2015. https://www.wired.com/2015/11/google-open-sources-its-artificial-intelligence-engine/.
Microsoft News Center. “Democratizing AI. For Every Person and Every Organization.” Microsoft Features, 26 September 2016. https://news.microsoft.com/features/democratizing-ai/.
O’Neil, Cathy. Weapons of Math Destruction. Penguin Random House, 2016.
Osman, Luqman and Dawson Sewell. “Hugging Face.” Contrary Research, 14 September 2022. https://research.contrary.com/reports/hugging-face.
Popkin, Helen A.S., Ohnsman, Alan and Kenrick Cai. “The AI 50.” Forbes, 9 May 2022. https://www.forbes.com/lists/ai50/?sh=73ac1959290f.
Raymond, Eric S. The Cathedral and the Bazaar. Musings on Linux and Open Source by an Accidental Revolutionary. O’Reilly, 2001.
Srnicek, Nick. “The Political Economy of Artificial Intelligence.” Recorded talk, Great Transformation, Jena, 23.09.-27.09.2019. https://www.youtube.com/watch?v=Fmi3fq3Q3Bo.
Srnicek, Nick. “Data, Compute, Labor.” Digital Work in the Planetary Market, edited by Mark Graham and Fabian Ferrari. The MIT Press, 2022, pp. 241-262.
Stalder, Felix. “Partizipation. Von der Teilhabe zum Spektakel und zurück.” Vergessene Zukunft. Radikale Netzkulturen in Europa, edited by Clemens Apprich and Felix Stalder. transcript, 2012, pp. 219-225.
Steyerl, Hito. “A Sea of Data: Pattern Recognition and Corporate Animism (Forked Version)”. Pattern Discrimination, edited by Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer and Hito Steyerl. meson press/Minnesota Press, 2018, pp. 1-22.
Sudmann, Andreas. The Democratization of Artificial Intelligence. Net Politics in the Era of Learning Algorithms. transcript, 2019.
Tkacz, Nathaniel. Wikipedia and the Politics of Openness. The University of Chicago Press, 2015.
Tsing, Anna. “On Nonscalability: The Living World is Not Amenable to Precision-Nested Scales.” Common Knowledge, vol. 18, no. 3, 2012, pp. 505-524.
Turner, Fred. From Counterculture to Cyberculture. The University of Chicago Press, 2006.
Turner, Fred. “Machine Politics. The Rise of the Internet and a New Age of Authoritarianism.” Harper’s Magazine, 2019. https://harpers.org/archive/2019/01/machine-politics-facebook-political-polarization/.
van Dijck, José. The Culture of Connectivity. A Critical History of Social Media. Oxford University Press, 2013.
Verdegem, Pieter. “Introduction: Why We Need Critical Perspectives on AI.” AI for Everyone? Critical Perspectives, edited by Pieter Verdegem. University of Westminster Press, 2021, pp. 1-18.
Verdegem, Pieter. “Dismantling AI Capitalism: The Commons As an Alternative to the Power Concentration of Big Tech.” AI & Society, 9 April 2022.
West, Sarah Myers, Whittaker, Meredith and Kate Crawford. Discriminating Systems. Gender, Race, and Power in AI. AI Now Institute, 2019.
Sandy Di Yu
Time Enclosures and the Scales of Optimisation: From Imperial Temporality to the Digital Milieu
Abstract
This paper looks at the cluster of phenomena that aggregates into what has been called a crisis of time, where experiences of time have become at once stretched to perpetuity and compressed to negligibility. The former results from the perceived endurance of digital media that feign everlasting memory and recall, whilst the latter is due to the speeds at which information is processed, making wait times feel intolerable. In either case, digital technologies have seemingly rendered time into something unrecognisable on a human scale.
Whilst there are several competing theories on elements that contribute to this, such literature has largely been confined to the discourse on speed, acceleration, and standardisation. What has been so far overlooked is the logic of optimisation, a mode of operation that is endemic to digitality. Optimisation, which captures aspects of digitality that exceed the scope of efficiency, is particularly insidious within the digital milieu due to the abstraction necessitated by digital processes. I analyse optimisation as it surfaces in capitalist history in the form of land privatisation and imperialism, tracing it through to the digital milieu, producing what I term “time enclosures”. This term parallels the land enclosures that were the historical preconditions of capitalism in order to articulate a specific element of privatisation and commercial value in the crisis of time. Finally, I relate optimisation to the entwined values and histories of imperialism that are premised on linearity and progress to explore the thread that corrupts our sense of time through digital technology’s effects on retention and protention.
A crisis of time
There is a phenomenon, sometimes referred to as “a crisis of time”, which is widely experienced in our current epoch, often summed up in the paradoxical phrase, “the more time we save, the less we have” (Rosa 16). The story behind the crisis is a familiar one, and its beginnings might go something like this: our technologically advanced society is overflowing with tools, both digital and mechanical, that allow us to do more in less time. Communicating with anyone at any distance is easy and uncomplicated. Automation technologies mean repetitive tasks are undertaken by machines so that the work left to humans may be creative, fulfilling, and rewarding. In light of this surge in technological advancement, it seemed for a moment that we may finally be lifted from the alienation caused by the state of labour, that we might find time for the pursuit of a good life beyond the socioeconomic confines of our contemporary moment (Srnicek 7).
Of course, that’s not how the tale unfolds. Despite the many advances in technology in the last century, the promises of automation lie unfulfilled as its claim of emancipation from mundane work is devoured by an insatiable economic need for growth (Lovink 84). Even with all the conveniences that digital technologies offer, we’re left feeling short on time, both in the cadence of the day-to-day and in the span of a lifetime in its entirety, where “life is short” remains an uncontested idiom. Equally, there is a pervasive feeling of standstill, where the experience of a perpetual present emerges from the constancy of update, a present that is not held accountable to a past and does not have a future to work towards. This paper analyses our present moment of data surplus in order to understand what is particular about digital technologies that contribute to this crisis of time.
Although time and technology have been widely studied throughout the past few decades, such literature has largely been confined to the discourse on speed, acceleration, and standardisation. This includes Stiegler’s repositioning of technics as time in the exteriorisation of memory, Harvey’s space-time compression where machines shrink our sense of distance and its relation to time, and theorists writing about the ways photographic and film technologies introduced new and asynchronous timelines (Solnit; Mroz). Processes of acceleration are often cited as the underlying cause of this crisis of time (Rosa 21), guiding technological evolution and proliferation. However, this does not account for the particularities of the digital, nor does it acknowledge the preconditions that enforce its singular directionality. What is missing from the equation, I argue, is the logic of optimisation, which manufactures a forward-thrust orientation that affects digital society at every level.
Optimisation, as it surfaces in the digital, emerges from the contested histories of progress and improvement to result in what I term a time enclosure, paralleling the land enclosures of medieval Europe and colonised terrain to mimic the same process of privatisation. Optimisation is inextricable from our socioeconomic realities just as it habituates end users of digital technologies to reconfigure collective experiences of time. I will thus explore how these historical instances of optimisation transmute into the digital and investigate whether it is possible to escape the logic of optimisation in the digital milieu.
Time in the digital milieu
The sensation of time shortage, poverty, and lack is exaggerated under the current conditions of the digital society, where time is rendered at once negligible and infinite. Time’s purported negligibility is due to the incredible speeds at which information is processed, such that waiting feels intolerable and instantaneity is expected (Crary), whilst its infinitude is due to the perceived perpetuity of digital media, premised on the supposed endurance of decentralised, unchanging informatics (Groys). This archive of knowledge is understood to be built on the mythical backbones of a system made to detect and withstand nuclear threats (Abbate), which promotes a quality of immutability that further feeds into the feeling of standstill. The ubiquity of digital technologies today, coupled with their innate logic of abstraction and automation, has resulted in previous theories on time and technics becoming inadequate in accounting for the particularities of the digital.
The intensification of these changes renders a time that is without presence and a present that is without time, lacking past and future. Time scales are stretched and squeezed to the point of disappearance, experienced and expressed in various ways that hint at a crisis of time, such as Berardi’s study of an impotence that denies us the ability to imagine alternative futures.
To contextualise these changes, I use the term “digital milieu”, as articulated by Yuk Hui in On the Existence of Digital Objects, which describes the current milieu of “multiple networks… connected together by protocols and standards” (Hui 26). The term moves from “the notion of system to the notion of the associated milieu proposed by Simondon as a response to the rampant advance of industrialization” (Ibid 221), an important distinction that captures an undercurrent of commercial value which we will see is a pretext for optimisation.
Optimisation, politically and digitally
A definition of “optimisation” must first be established before such an investigation can proceed. Optimisation is generally understood as a way to make the best use of something. This definition may initially appear benign; however, it does not hold up under scrutiny, for both the words “best” and “use” may be politically and culturally charged, such that “optimisation” becomes polemical when ideas of what constitutes “best” and “use” deviate. Usefulness, as outlined in Sara Ahmed’s What’s the Use?: On the Uses of Use, is a framework that is capable of shaping phenomena. Whereas what is considered “best” uncontroversially varies depending on the value system used to judge that which is under consideration, to use is to turn something into a goal-driven tool, infusing it with a purpose (Ahmed 23) or else stripping that something of subjectivity (ibid 5). The two words are tied to one another causally, where “to use one’s faculties more is to become better at something, with betterment understood as a molding, as being shaped by function” (ibid 92). The joint directionality of the terms “best” and “use” (ibid 45) embeds a particular directional logic into optimisation, the same logic that I argue originates in the beginnings of capitalist economies and finds its current and most potent iteration in the digital.
In optimisation’s digital applications, what may appear to be a harmless way to describe processes aimed at fulfilling specific ends results in the preclusion of other frameworks through which labour, culture and history may be understood. As evident in code and software, optimisation means that code qualifies as beautiful, becoming an object of aesthetic admiration when it boasts the fewest lines of code necessary to execute a function or run a programme. As noted by Galloway: “The concept of optimization is important to algorithmic aesthetics… To optimize a system means to increase its efficiency, to eliminate redundancy, and to exploit advantages” (Galloway 324).
Optimisation, thus, shares characteristics with the concept of efficiency, where the latter is understood as achieving the most output with the least input. However, there exists a break in our moment of data surplus, within the digital milieu, where a more all-encompassing logic underlies the digital. This logic aims for longevity, hyper-synchronisation and other technical processes that include but also exceed the scope of efficiency.
The goal for digital objects within the digital milieu is thus no longer merely “more for less”, but a myriad of interrelated processes that take on the joint directional logic of optimisation’s component definition, “best” and “use”. As explained by Halpern and others, "[i]t was once the imagined limits to resources and energy that shaped industrial conceptions of efficiency, energy, and labor power. In the early twenty-first century, data capitalism changes this formula by putting the derivative before the source. Derivation takes the place of extraction, and where there was efficiency, there is now optimization" (205).
Efficiency, which takes on the logic of “least for the most”, also overlooks the prerequisites of functioning digitality; namely, the abstraction necessitated by the operation of digital technologies. On a structural level, what differentiates digitality from mechanical processes is an abstraction of information into discrete units (Galloway 24). In computer science, abstraction means that only relevant information from a group is derived to be used. It allows optimisation to occur in algorithmic entities to a degree that mechanical objects would not be subjected to (Kramer). When optimisation occurs on the level of code as opposed to user-facing interfaces, it is not simply the processes that become altered, but the digital objects themselves, which are “objects that take shape on a screen or hide in the back end of a computer program, composed of data and metadata regulated by structures or schemas” (Hui 1). Whilst to the user the objects on the screen may not appear different once the code has been optimised, their experience of the object will change. For example, an object may load faster, the metadata that specifies its origins might disappear, or the file type may change such that it becomes incompatible with certain software.
To expand on how optimisation alters digital objects due to the digital’s reliance on abstraction, we might look at the example of the creation of a JPEG image file from other file types, such as RAW. To optimise such a file means to decrease its size by stripping it of certain data whilst still keeping it discernible and enjoyable as an image to a human viewer. The ability of a digital object made of data to be compressed is attributed to the fact that what appears to human eyes as an object on screen can be abstracted into data and understood in terms of code, and abstracted continually from the computer languages that indicate how such objects should appear on screen down to the level of machine language: a series of ones and zeros, or hexadecimal format, transmitted from one modem to another, which can be recomposed through algorithmic processes into the digital object that is the desired outcome.
In order to analyse the process of optimisation in JPEG files, we must first understand how a JPEG works. JPEG functionality hinges on the human eye’s lack of discernment for certain levels of detail. We have a bias towards luminance over chrominance (light/dark versus variation in colour or frequency of the light spectrum), as well as a greater ability to detect low-frequency changes than high-frequency changes in imagery. This means a large portion of any given photographic image is redundant to human observers. To take advantage of this, JPEGs go through several algorithmic transformations to eliminate excess data, including converting the colourspace into a format that allows the removal of certain colour information and dividing the image into blocks in order to then rid the image of high-frequency information. If it were not for the JPEG’s inherent attribute of being constructed through data, such files would not be able to essentially siphon off bytes. The reformatting of the digital object’s constitution, in this case the compression of an image such as a RAW (unprocessed) file into a JPEG file, means that although the image appears the same or similar to the human eye, the object itself is fundamentally changed. The object is thus optimised through algorithmic transformation, resulting in an object that is similar in kind but intrinsically different in its configuration.
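This trade-off can be reproduced, in rough approximation, with a few lines of Python using the Pillow imaging library. This is a sketch only: the file names are placeholders, a PNG stands in for an unprocessed source (Pillow does not read camera RAW formats), and the quality setting is arbitrary.

 import os
 from PIL import Image  # Pillow imaging library

 # Re-encode a source image as a JPEG at reduced quality with 4:2:0 chroma
 # subsampling, discarding colour and high-frequency detail that the eye
 # barely registers. File names here are placeholders.
 source = Image.open("photo.png").convert("RGB")
 source.save("photo_optimised.jpg", "JPEG", quality=60, subsampling=2)

 print(os.path.getsize("photo.png"), "bytes before")
 print(os.path.getsize("photo_optimised.jpg"), "bytes after")

Comparing the two file sizes makes the siphoning of bytes tangible: the JPEG is typically a fraction of the size of its source, even though, to the viewer, the image on screen looks much the same.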
As with all that is encapsulated within the digital milieu, the optimisation of digital objects is inextricable from the formulation of commercial value. JPEGs, for example, came about through the desire for expansion by telecom corporations (Hudson). Thus, the optimisation of the digital object is often related to the user interface, which is again linked to commercial value. Optimisation of image files, for example, occurs so that webpage loading times are faster, helping a page rank higher on search engines such as Google. This ever-changing set of conventions that constitutes search engine optimisation (SEO) provides pages with a better chance of being seen by internet users (Killoran). Other examples of optimisation on the level of the user interface include social media optimisation, which follows black-boxed rules on what posts will be “favoured” by the algorithm in a balance between maintaining and monetising users. Likewise, dating apps and selling platforms offer advice as a part of their service on how to make oneself appear more appealing to attract potential suitors or buyers (Degan), and optimising for “scannability” is now key to digital communications (Sutter). This feeds into the ethos of hustle culture and self-optimisation, shared amongst entrepreneur-influencers and outmoded slogans that tell us to rise and grind, to adopt the habits of successful people we are told to aspire to, or to pick ourselves up by our bootstraps. In each of these cases, the directionality of optimisation is indicative of commercial value, whereby profit margins are expanded through the varied processes of optimisation. This can also be observed in the commodification of time in the network society, as described by Wendy Hui Kyong Chun, who explains that “value is generated online, and networks are valuable because information has become a commodity” (117). Thus, on the level of code, user interface, and networks, “best” and “use” merge with commercial value to inform the directionality of optimisation. As Galloway says, “Ever since Marx indicted exchange value and alienation, progressive movements have looked with scepticism at the domain of abstraction and optimization” (Galloway 211).
Optimisation and industry
The digital milieu is not the first example of a paradigmatic shift in production altering our relation to time. The industrial revolution, which led to the proliferation of mechanical production, the expansion of telecommunications systems, and other technological and managerial advancements, has previously led to temporal shifts that are well-documented by theorists from various disciplines. Notable theorists include economic geographer David Harvey, who analyses how the unhindered growth of capitalistic modes of production has resulted in a widespread feeling of dimensional annihilation and collapse, which he terms “space-time compression”. In his outline of the history of time in the capitalist epoch, Harvey says, “the spread of adequate measures of time-keeping had much more to do with the growing concern for efficiency in production, exchange, commerce and administration” (423).
Efficiency, thus, is at the forefront of the modification in our relation to time during eras of mechanical production, spearheaded by both new managerial programmes and technological advancement. In contrast to the optimisation of the digital milieu, efficiency rests on value extraction by the incentivisation of more labour for less time (Braverman). Methods of more-for-less were thus a straightforward way to squeeze out profit, such that the goal of ever-increasing efficiency was adopted by labour authorities. Most conspicuously, Taylorism, otherwise known as scientific management, was one of the most successful programmes, and its legacy in the restructuring of labour persists to the present day. In the bid for increased productivity, Frederick Winslow Taylor designed meticulous experiments with labourers and machines to find the optimum output of goods. His efforts led to the increased division of labour, enforced regular work hours, and a system of new social relations which championed efficiency to the detriment of the worker, who was seen as a mere cog in the productivity machine under this system. In various ways related to labour, such as the study of “the measurement of elapsed time for each component operation of a work process” (ibid 119) and the standardisation of measuring output at the end of each workday, Taylorism’s alteration of our relationship to time is tied together with the aim of profit.
We see in these historical moments of temporal change the same directionality that imbues the logic of optimisation. “Best” and “use” in the shift of industry during the 18th and 19th centuries became conflated with profit and material output on a mass scale, restructuring our understanding of time around the work day and divorcing our temporal logics from the social structures of pre-capitalistic society. Whilst efficiency is recast as optimisation once the relationship with commercial value exceeds managerial and labour processes, their joint directionality indicates an underlying logic that potentially predates the proliferation of industrial production. There is no doubt that the industrial revolution shifted our relation to time; however, the directional logic that is evident throughout those decades may be traced back elsewhere. With that in mind, I turn further back in time in an attempt to examine the beginnings of this forward-thrust directionality.
Progress, improvement, and time enclosures
The directional logic of “best” and “use” pervades our reality from the granular scale to the planetary, due in part to the reach of a digital milieu built on industrial infrastructure, yet its beginnings may be traced back before the proliferation of industry. In exploring this history, I aim to strike a parallel between the land enclosures that were crucial to the transition into capitalism and what I term time enclosures that are particular to the digital milieu. Although the term “optimisation” entered the popular lexicon fairly recently, its logic notably mimics the historical drive of progress, which informs the ideologies that have led to the desecration of peoples, cultures, and land. Progress, according to Azoulay, is “a destructive force, a movement, a condition embedded in temporal and spatial structures that in the course of a few hundred years has shaped the way we relate to the common world and narrate our modes of being together” (21). It “conditions the way world history is organized, archived, articulated, and represented” (11) such that even in the centuries after the initial violence of dispossession and plunder, the narrative often told is one that claims such actions to be ultimately justified.
Related to progress is the concept of "improvement", which offers a way to understand the histories tied to land and primitive accumulation of capital, as a forebear of present-day neoliberalism. Improvement is a “working towards” that denotes both motion and direction, similar to optimisation. Historically, this term comes up in documentation about land improvement, a process of privatisation that might find synonymous threads in land developments of today. Improvement is also one of the pillars of Locke’s theory of property, which has been rebuked for its justification of English settler colonialism (Arneil).
The transition from the largely agrarian labour force of feudalism towards waged industrial labour involved centuries of direct and indirect violence and bloodshed in order to set the stage for what Marx termed the “historical preconditions” of capitalism. According to economic historian Michael Perelman, the classical political economists of the 17th to early 19th century “understood that market society required strong measures in order to coerce large numbers of people to join the market revolution” (Perelman). Amongst other losses, these “strong measures” resulted in the loss of land access: communal land had to be eliminated as a way to incentivise wage labour, and if peasants and labourers had any land to their names, it was only to subsidise what meagre living they earned. As 19th-century Scottish reformer Robert Gourlay once wrote, “It is not the intention to make labourers professional gardeners or farmers! It is intended to confine them to bare convenience” (ibid).
Land improvement surfaces here in two ways: in the initial changing of wild landscape into arable land, and in the enclosure and privatisation of land. The disintegration of common land contributed to drastic changes across agriculture and industry, where “enclosure changed agricultural practices which had operated under systems of cooperation in communally administered landholdings… between 1750 and 1830 in England more than 4,000 enclosure Acts were passed. The process continued through the 19th cent. until there were hardly any open fields remaining” (Cannon). In the years to come, enclosure took place at new speeds as value extraction came to be understood through the lens of time management. “While enclosure was a long-standing rural practice, it began to take on a qualitatively different scale and scope. Not only did the pace of enclosure, in many parts of England, begin to accelerate, but also it was often undertaken without agreement” (Blomley).
While enclosures are an event of centuries past, their legacy of improvement and progress remains such that we might consider enclosures as an adequate term to describe the processes that surface in relation to the crisis of time, where privatisation of time in the digital milieu to extract commercial value parallels the privatisation of space that occurs in the histories of land improvement and enclosures. Consider, for example, the attention economy, a direct transgressor in this privatisation of time, whereby every moment is a moment to be capitalised upon, from which tech and media companies aim to extract value through collecting data or showing a constant barrage of advertisements. Similarly, the gig economy is also an instance of this insidious privatisation, whereby the precarity faced by workers habituates them into necessarily offering their time to the whims of commerce at all hours and seasons. On the level of software, the transition from ownership to subscription models of usage also reinforces this idea of privatisation within the digital milieu, where your time of access is dependent on the continual payments to SaaS (Software as a Service) tools.
I term this particular conflation of temporal loss through privatisation and technological evolution time enclosures, evoking the historical socioeconomic modes of operation that led to those of our current lived reality and offering a framework through which the crisis of time may be analysed. Time enclosures speak specifically to issues of property, value, and privatisation in relation to optimisation and progress, and it is within the digital milieu that such enclosures may occur, where our relations with one another are palpably more than spatial. Progress thus surges in a singular direction, first enclosing land (space) before seeping into the digital, which exceeds the dimension of space and reaches into the realm of time.
Imperial temporality
Under the dominion of progress, colonial expansion was part and parcel of the privatisation of land. Whilst most former colonies have transitioned into neocolonial or post-colonial relations with their oppressors, the legacy of Western colonialism persists to this day in more and less obvious ways. To understand how this legacy surfaces, I turn again to Ariella Aïsha Azoulay, who has articulated how progress might be understood through the lens of history as a destructive force which promulgates an imperial temporality. Here, we might understand temporality not in the minute day-to-day habits and affairs of individuals, but rather temporality as the tides of history. The lasting consequences of imperial temporality include relegating certain histories to a past that has been shut away, chapters that are not to be reopened, effectively disallowing certain individuals and cultures from re-entering the present as dynamic and changeable (Azoulay 78). Related to digital technology and its milieu, we might be reminded of the way in which digital objects must structurally be consigned to strict categorisation in order to be called upon and used by algorithmic processes.
Imperial temporality disallows movement in any way but forward, and the events of yesterday are accepted to have been done for the sake of progress and an assumed moral objectivity. Imperial temporality is the phenomenon through which “the violent processes of impoverishing and dispossessing people… are obscured by the ideology that poverty is… an attribute of such people”, where “the violent imposition of resource monopoly is converted into the allegedly beneficent and necessary regime of law and order” (ibid 77). This temporality, thus, follows the “imperial movement of progress”, a linear motion that denies those outside of Western sovereignty the opportunity to reopen their histories, pronouncing certain cultures an event of the past that has had its final chapter.
For Azoulay, to undo imperial temporality, one must do away with the bookends that frame colonialism as having a stark beginning and end, and instead focus on the operators of colonialism that persist into the present. These bookends can also be understood as time enclosures of a larger scale, similar to the time enclosures particular to the digital milieu, enclosing histories to mutate them into objects that may be collected and categorised. And like digital time enclosures, the entanglement of commercial value, cultural memory and exploitation of labour results in this enclosure that reaches across histories. The operators, and thus the forces that maintain these colonial bookends, include cultural institutions such as museums and archives, which continue to sustain particular narratives of what belongs to history and what is a living culture (ibid 88).
The artefacts stolen or traded from their original contexts to be placed behind glass and cut off from the flow of history enclose the chapters of past cultures such that the narrative of progress by any means necessary is all that remains. It exemplifies the particular telos of progress that disallows the possibility of alternative socioeconomic landscapes or imaginaries where, after Mark Fisher, Fredric Jameson and Slavoj Žižek, it’s easier to imagine the end of the world than the end of capitalism (Fisher 2). And it’s this thread that we see surface as optimisation in the digital milieu.
Tertiary protention and the experience of optimisation
Carried forth into the current era of computational capitalism, imperial temporality continues to permeate the logic of digital technologies and media at every level, whereby the only trajectory possible is forward, however that can be achieved. Progress transforms into optimisation under the primacy of the digital so that the same logic weaving through imperialism informs how technologies evolve, where an imperial temporality both sustains and is sustained by the digital milieu. This is done through the aim of progress, a contrived movement in the direction of a purported “best”. The digital confronts this directionality with the quality of “use” because digitality came about as a tool, built for purpose before its ubiquity enforced reliance on it. At the same time, the digital uses its users to extract further value in the form of data and advertisement. It’s not for nothing that individuals of the digital milieu are often referred to as “users”.
Optimisation in digital technology also means the ability to retain information and anticipate future instances. The retention of information is exemplified by the internet as a source of information, a global aggregate archive that may be accessed with the right combination of hardware and software. The anticipation of the future is key to risk management and data analytics, a troubling subset of digital media that has led to socioeconomic and racial injustices (Chun 58). This is especially relevant when we consider the advances in digital technologies as the volume and quality of predictive and generative AI increase. In considering the implications of such technologies on experiences of time, I look to the concept of tertiary protention to better understand how futurity and digitality are entangled.
Protention, a term coined by Edmund Husserl in phenomenology, is the anticipation of the next moment, in contrast to retention, the mechanism of memory. As explained by Yuk Hui, there are primary and secondary protentions, “the primary protention being the anticipation of the immediate coming moment… and the secondary protention being anticipation or expectation based on past experience” (Hui 221). Because of our reliance on technology, especially digital technologies through which our communications are mediated, Hui proposes a third type of protention.
Tertiary protention, according to Yuk Hui, refers to how “in our everyday lives, technology becomes a significant function of the imagination” (Hui 221). This is heavily influenced by and contrasted with Bernard Stiegler’s tertiary retention, a designation of technology as the exteriorisation of memory (ibid 222). In today’s society, digital technologies do more than habituate their users; they become the very means by which time is experienced. The passing of time through swipes and updates, coupled with the hypersynchronisation of networks that ironically allows for asynchronous communications such as instant messaging, contributes to the ways in which we relate to time. Under the logic of optimisation, it is not farfetched to propose the possibility that our collective imaginations are guided directionally towards an undetermined goal, that of “best” and “use”, as with all actions in digital processes.
The optimisation of digital technologies means imagination becomes subject to an exterior mechanism that constantly reforms for the sake of lighter digital loads, more efficient processes or immutable data structures with the pulse of commercial value surging through each. The introduction of predictive and generative machine learning leads to further complications where the effects of technology exceed the processes and build of machines towards a territory where our imaginations are entirely subject to the functionality and the corresponding outputs of these technologies, and “in terms of the logical capacities and operations of machines” (ibid 223).
Tertiary protention considers the use of data analysis for statistical predictions, activating digital objects from the purview of retention through algorithmic processes. Examples abound in today’s society, from the automated coffee machine that Hui uses to illustrate this, whereby the machine anticipates that at a certain time, you’ll want a cup of coffee (ibid 240), to the large language models that provide viable routines for those who are after specific diets or bodily results. With optimisation, digital processes mimic seamlessness so that tertiary protention is increasingly difficult to detect, thus difficult to refute. In terms of scale, tertiary protention means that both the ability to recall information thanks to artificial, exteriorised retention and the immediacy of output through the incredible speeds of external processing feed into altered experiences of time. Time is thus enclosed through the perpetuity of the present that disallows other futures and through time’s negligibility, where the next moment is always already here. Privatised, optimised, and enclosed, time in the digital milieu ceases to flow with the tempo of experience, running counter to circadian cycles and diurnal rhythms and the metronome of care beyond the scope of commodity, scaling beyond human temporalities and amplifying the crisis of time.
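A toy Python sketch may make the mechanism of protention-from-retention concrete. It follows the spirit of Hui’s coffee-machine example, anticipation computed from stored past behaviour, though the logged data and the simple averaging rule are assumptions for illustration, not any actual appliance’s algorithm:

```python
# Toy sketch: tertiary protention as anticipation derived from exteriorised
# retention. Illustrative data and rule only.
from statistics import mean

# Exteriorised retention: logged hours of day at which coffee was brewed.
logged_coffee_hours = [7.5, 7.4, 7.6, 8.0, 7.5]

# Tertiary protention: the machine anticipates the next moment from the past
# and acts before being asked.
predicted_hour = mean(logged_coffee_hours)
print(f"Pre-brew scheduled for {predicted_hour:.1f}h tomorrow")
```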
Minor tech and optimisation
How might the logic of optimisation be countered, and is it something that can be abandoned whilst digital technologies remain an inextricable part of our everyday lives? Whilst I cannot provide a definitive answer that will ease the effects of or else mitigate the crisis of time in the digital milieu, I want to offer a few examples of digital projects that rethink the logic of optimisation. One such project, contrary to the aesthetics of algorithms that aim for efficiency and fewer lines of code, is Winnie Soon and Geoff Cox’s Aesthetic Programming, a handbook which rethinks methods of “learning to program as a way to understand and question existing technological objects and paradigms, and to explore the potential for reprogramming wider eco-socio-technical systems”. It has the potential to mitigate the directional logic of computational thinking that habituates learners of programming, galvanising critical thinking in its stead. Artist Ben Grosser also provides tongue-in-cheek responses and examples of minor tech that could counter the issue of scale in the crisis of time. His project Minus is a finite social media network where users are given 100 posts for life, counter to the optimisation of other social media platforms which subsist on maximising value extraction through the greatest number of users. He also wrote on Twitter, “my new chat AI, called Enough, is a small language model that draws on a one-parameter pre-trained corpus—the smallest in history—and answers every question with the same response: ‘No.’”
Minor tech, thus, holds the potential to resist the uncontested trajectory of optimisation. It opts not for commercial value, to do and reach the most in the least amount of time, but to provide another pathway into the digital. These projects don't promise to reconfigure our entire relationship with the digital and its logic of optimisation, nor do they attempt to redress the enclosure of time, but what they offer, instead, are ways to re-enter the digital milieu with fresh concepts that are not built on the temporalities of old, nor its preexisting logic of progress, goal-orientation and directions. Although they act as small instances of refusal, their very presence indicates the possibility of alternative modes of being and a fissure that may be pried open in order to reclaim digitality as a method of resistance.
Works cited
Abbate, Janet. Inventing the Internet. 3rd printing, The MIT Press, 2000.
Ahmed, Sara. What’s the Use? On the Uses of Use. Duke University Press, 2019.
Arneil, Barbara. "Origins: Colonies and Statistics". Canadian Journal of Political Science, vol. 53, no. 4, Dec. 2020, pp. 735–54. DOI.org (Crossref), https://doi.org/10.1017/S000842392000116X.
Azoulay, Ariella Aïsha. Potential History: Unlearning Imperialism. Verso, 2019.
Berardi, Franco. Futurability: The Age of Impotence and the Horizon of Possibility. Verso, 2019.
Schiermer, Bjørn. "Acceleration and Resonance: An Interview with Hartmut Rosa". Acta Sociologica, no. E-Special: Four Generations of Critical Theory in Acta Sociologica, 2020.
Blomley, Nicholas. "Making Private Property: Enclosure, Common Right and the Work of Hedges". Rural History, vol. 18, no. 1, Apr. 2007, pp. 1–21. DOI.org (Crossref), https://doi.org/10.1017/S0956793306001993.
Braverman, Harry. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. Monthly Review Press, 1975.
Cannon, John, and Robert Crowcroft, eds. A Dictionary of British History. Third edition, Oxford University Press, 2015.
Carter, Daniel. "Hustle and Brand: The Sociotechnical Shaping of Influence". Social Media + Society, vol. 2, no. 3, July 2016, p. 205630511666630. DOI.org (Crossref), https://doi.org/10.1177/2056305116666305.
Chun, Wendy Hui Kyong. Updating to Remain the Same: Habitual New Media. The MIT Press, 2016.
Crary, Jonathan. 24/7: Late Capitalism and the Ends of Sleep. Paperback ed, Verso, 2014.
Degen, Johanna Lisa, and Andrea Kleeberg-Niepage. "Profiling the Self in Mobile Online Dating Apps: A Serial Picture Analysis". Human Arenas, vol. 6, no. 1, Mar. 2023, pp. 147–71. DOI.org (Crossref), https://doi.org/10.1007/s42087-021-00195-1.
Federici, Silvia. Caliban and the Witch. Second, Revised edition, Autonomedia, 2014.
Fisher, Mark. Capitalist Realism: Is There No Alternative? Zero Books, 2009.
Galloway, Alexander. Uncomputable: Play and Politics in the Long Digital Age. Verso Books, 2021.
Grosser, Ben. Minus. Social media network, 2021–present.
Groys, Boris. In the Flow. Verso, 2016.
Halpern, Orit, et al. "Surplus Data: An Introduction". Critical Inquiry, vol. 48, no. 2, Jan. 2022, pp. 197–210. DOI.org (Crossref), https://doi.org/10.1086/717320.
Harvey, David. "Between Space and Time: Reflections on the Geographical Imagination". Annals of the Association of American Geographers, vol. 80, no. 3, 1990, pp. 418–34.
Hudson, G., et al. "JPEG at 25: Still Going Strong". IEEE MultiMedia, vol. 24, no. 2, Apr.–June 2017, pp. 96–103, https://doi.org/10.1109/MMUL.2017.38.
Hui, Yuk. On the Existence of Digital Objects. University of Minnesota Press, 2016.
Jackson, Brian. "How To Optimize Images for Web and Performance". Kinsta, 7 Feb. 2023, https://kinsta.com/blog/optimize-images-for-web/#optimize-images-for-web-case-study.
Killoran, John B. "How to Use Search Engine Optimization Techniques to Increase Website Visibility". IEEE Transactions on Professional Communication, vol. 56, no. 1, Mar. 2013, pp. 50–66. DOI.org (Crossref), https://doi.org/10.1109/TPC.2012.2237255.
Kramer, Jeff. "Abstraction in Computer Science & Software Engineering: A Pedagogical Perspective". System Design Frontier Journal, vol. 3, no. 12, 2006, pp. 1–9.
Lovink, Geert. Sad by Design: On Platform Nihilism. Pluto press, 2019.
Mroz, Matilda. Temporality and Film Analysis. Paperback edition, Edinburgh University Press, 2013.
Parikka, Jussi. "Ernst on Time-Critical Media: A Mini-Interview". Machinology, https://jussiparikka.net/2013/03/18/ernst-on-microtemporality-a-mini-interview/.
Perelman, Michael. "Primitive Accumulation from Feudalism to Neoliberalism". Capitalism Nature Socialism, vol. 18, no. 2, June 2007, pp. 44–61. DOI.org (Crossref), https://doi.org/10.1080/10455750701366410.
Rosa, Hartmut, et al. Social Acceleration: A New Theory of Modernity. Paperback ed, Columbia University Press, 2015.
Solnit, Rebecca. River of Shadows: Eadweard Muybridge and the Technological Wild West. Penguin, 2004.
Soon, Winnie. "Microtemporality: At the Time When Loading-in-Progress". Proceedings of the 22nd International Symposium on Electronic Art ISEA2016 Hong Kong, 2016, pp. 209–15.
Soon, Winnie, and Geoff Cox. Aesthetic Programming: A Handbook of Software Studies. Open Humanities Press, 2021.
Srnicek, Nick, and Alex Williams. Inventing the Future: Postcapitalism and a World without Work. Verso Books, 2015.
Stiegler, Bernard. Technics and Time. Stanford University Press, 1998.
Sutter, Brian. "The Most Overlooked Factor Of Content Marketing? Scannable Content". Forbes, 18 Dec. 2015.
Inte Gloerich
Towards DAOs of Difference: Reading Blockchain Through the Decolonial Thought of Sylvia Wynter
Abstract
With this article, I explore the connections between blockchain technology, coloniality, and decolonial practices. Drawing on Sylvia Wynter’s thought on the interdependent systems of colonialism, capitalism, and knowledge, as well as more recent work on the coloniality of digital technologies, I argue that blockchain-based systems reproduce certain dynamics at work in historical colonialism. Additionally, Wynter’s decolonial propositions provide a generative framework with which to understand countercultural practices. Inspired by Wynter, Patricia de Vries explores the notion of “plot work as artistic praxis” to ask how artistic work, implicated as it is in capitalist logics, can create space for relating differently in the context of the exploitations of those dominant logics. I apply this notion to examine how Decentralised Autonomous Organisations (DAOs) in the countercultural blockchain space might contribute to this praxis.
Introduction
“Human beings are magical.” (Wynter ”The Pope must have been drunk” 35)
Throughout the ebbs and flows of its hype cycles, blockchain technology continues to spark hope for a better future in mainstream as well as countercultural communities. This is possible because, in all its complexity, blockchain works as a floating signifier that represents very different opportunities to different people (Semenzin). To understand how and when blockchain technology and culture do or do not represent a radical break away from the status quo, I place it next to Sylvia Wynter’s theories on the way the history of colonialism and the continuing coloniality of power are intertwined with capitalism and its order of knowledge. I focus in particular on two dimensions in Wynter’s examination of colonialism: the relational and the epistemological. In the first, Wynter portrays the entangled history of colonial appropriation and exploitation of nature and human life and the emergence of global capitalist relations of extraction. In the second, Wynter shows how the extractions of capitalism are supported by a colonial order of knowledge that creates exploitable less-than-human Others. After relaying essential elements of Wynter’s theory, I relate both dimensions to contemporary blockchain practices and expand existing theories on their coloniality. I then return to Wynter's thoughts on decolonial practices in the interstices of the plantation, called plots. These plots are places in which non-extractive social relations may be practiced, but they are also narratives that provide different ways to understand life and what it means to live together. I draw on the work of artists and writers, such as Sarah Friend, Ruth Catlow and Penny Rafferty, who use blockchain technology in ways that echo Wynter’s decolonial propositions. Inspired by Wynter, researcher of socially engaged artistic practices Patricia de Vries explores the notion of “plot work as artistic praxis” to ask how artistic work, implicated as it is in capitalist logics, can create space for relating differently in the context of the exploitations of those dominant logics (de Vries n.p.). I apply the notion of plot work here to examine how Decentralised Autonomous Organisations (DAOs) in the countercultural blockchain space might contribute to this praxis.[1] In what follows, I start each section with a quote by Sylvia Wynter, which I subsequently elaborate on and relate to the current blockchain space.
Historical colonialism and blockchain colonialism
The Caribbean area is the classic plantation area since many of its units were ‘planted’ with people, not in order to form societies, but to carry on plantations whose aim was to produce single crops for the market. That is to say, the plantation-societies of the Caribbean came into being as adjuncts to the market system; their peoples came into being as an adjunct to the product [...] which they produced. As Eric Williams has shown, our societies were both cause and effect of the emergence of the market economy (Wynter ”Novel and history” 95)
Wynter writes that the West’s colonisation of the Caribbean lies at the foundation of the emergence of capitalism. Western colonisers reduced the people they enslaved to labour and the nature they encountered to arable land. The places they reached were seen as nothing more than a blank slate easily capturable by a system of private ownership unfamiliar to the indigenous communities living off the land. At the same time, enslaved people were reduced to a dehumanised asset functioning as a cog in the machinery of early global capitalism. Both humans and nature were integral to the process of extracting value back to the West, but both were treated without regard for their survival except in their one-dimensional purpose as an individually replaceable resource for profit on the market in the form of labour and land. As nature and indigenous people made way for plantations, the value of harvested crops turned from something that could be eaten by the people that cultivated it – use value – to something that could be exchanged for money on the market – exchange value. To Wynter, colonial exploitation and capitalist extraction come together on the plantation: domination through marketisation, marketisation through domination (Ibid. 96-99).
Mirroring the role of historical colonialism in the establishment of early capitalism, data colonialism is the process through which data readies that which it represents for capitalist appropriation and extraction.[2] By facilitating and naturalising the production and capture of ever-newer forms of data, data colonialism is able to find corners of life that have not yet been capitalised upon (Ibid. ”Data colonialism” 339-343). Couldry and Mejias call this the “double process of renewing colonialism and expanding capitalism” (”The cost of connection” 188). They warn against the role of data colonialism in the emergence of a new form of capitalism, one characterised by “the capitalization of life without limit” (Ibid. 3). The appropriation of nature and people that Wynter described in historical colonialism is renewed in the appropriation of “human life through extracting value from data” (Ibid. 188). By focussing on the quantification of social life and the role of this datafication in the renewal of colonialism and the expansion of capitalism, Couldry and Mejias show the devastating effects for the possibility of just social relations and self-determination (Ibid. 188-91).
Blockchain-based systems have been shown to proliferate the logics of data colonialism. They ready uncaptured territories of life for continuously expanding value extraction – a form of “digital frontierism” (Thatcher, O’Sullivan, & Mahmoudi 992) that in the early days of the technology spawned goldrush metaphors and analogies, such as the ‘mining’ of Bitcoin in the unregulated ‘Wild West’ (Maurer, Nelms, & Swartz 262; Maurer & Swartz 222). The various forms of tokenisation that take place on blockchains can turn the things they represent or contain in their metadata – votes, stakes, access rights, personal data, etc. – into tradeable items that can be controlled in new ways through distributed governance structures. While this is seen by many as an opportunity to democratise, it does not necessarily have this effect. For example, blockchain technology has been forced onto vulnerable communities such as refugees who have no real choice but to give away their personal data to be stored in immutable systems in exchange for basic necessities – data which may be capitalised upon in unforeseeable ways in the future (Howson ”Climate crises” 4-5; Howson ”Crypto-giving” 814-815). Through its proposed and real use in (social) governance systems – in places often deemed underdeveloped from a Western perspective (Crandall 286-88), but also more generally, for example in blockchain-based ID systems, supply chain transparency systems, or dating apps – blockchain technology represents an “emerging cartography of control” that is always looking for a new frontier to map (Jutel 3). This often happens under the guise of lofty societal goals, such as the development of solutions against climate change that have led to projects like Nemus (“Treasure the Forest”) and Moss (“Moss Amazon NFT”) that tokenise pieces of the Amazon rainforest to be sold as NFTs. They continue the rarity economy that NFT collectibles propagated – in which special characteristics such as caves or waterfalls might increase the value of the NFT of a piece of land – and are governed from afar by stakeholders in a DAO. Just like land and labour in historical colonialism, these tokenised representations of the world are abstracted assets that promise a future stream of income while caring little about the survival of the thing they represent (Juárez). Despite claims about solving climate change, the rainforests themselves only become meaningful in those DAOs if they produce monetary value for their stakeholders. These projects exemplify the way in which blockchain colonialism expands on data colonialism by introducing novel governance systems that are embedded even more intrinsically in the logics of economic exchange, making possible further alienation from the nature and life at hand.
The invention of Man and the reinvention of truth
[T]he struggle of our new millennium will be one between the ongoing imperative of securing the well-being of our present ethnoclass (i.e., Western bourgeois) conception of the human, Man, which overrepresents itself as if it were the human itself, and that of securing the well-being, and therefore the full cognitive and behavioral autonomy of the human species itself/ourselves (Wynter ”Unsettling the coloniality of being” 260)
Here, Wynter shows that the struggle for autonomy and well-being of the human in all its capacities is deeply intertwined with the power relations that have determined what is considered knowledge and truth about humanity over the past centuries. The quote above points at several important elements in Wynter’s theory: the overrepresentation of Western Man in the history of humanism, how this overrepresentation places Others outside of the human category, and how it provides a foundation for systems of domination. Wynter exposes the role of humanistic knowledge systems in the construction of an exploitable less-than-human Other. This order of knowledge takes the character of Western Man and universalises it to stand in for all of humanity, for Man, and Wynter shows that this logic still dominates societies today. To understand how this selective knowledge system emerged, Wynter looks to Renaissance humanism and its invention of Man as a secularised rational Man that is subject to the state primarily, rather than solely to the divine that dominated the Middle Ages. This newly intellectual and civilised Man was contrasted with the constructed irrational, uncivilised savageness of the colonial Other, who as a result was not included in the category of ‘human’. However, the secularisation that took place as part of the invention of Man was only partial at this point, and the process continued through the centuries. The scientific developments of the Enlightenment evolved and updated the category of Man to understand it in fundamentally biological and economic terms. Here, Man emerges out of the order of nature and the market. Newly discovered universal laws of nature offered biologically essentialised proofs for the distinctions between Man and Other and laid the groundwork for the linear and teleological understanding of evolution and eugenicist theories of race established in the 18th and 19th centuries. Entangled with this history is the unfolding capitalist mode of production, which eventually brought with it the figure of Homo Economicus, i.e. the rational Man in the free market. This biologically and economically essentialised version of Man persists until today. Western knowledge systems still overrepresent Western Man and universalise it, invisibilising and making unworthy of humane treatment those that do not fit this narrow mould (Ibid. ”Unsettling the coloniality of being” 260, 264, 282, 296, 317). This process of colonial power relations reproducing themselves after historical colonialism into contemporary forms of domination and exploitation in the name of capitalism is what Aníbal Quijano calls the “coloniality” of power (Quijano 171).
The interplay between coloniality and the expansion of capitalism into new domains through contemporary datafication practices is a central feature in Couldry and Mejias’ thinking on data colonialism’s “distortions of knowledge through power” (Nick Couldry & Ulises Ali Mejias ”The decolonial turn” 795). Much work has been done in recent years to uncover the many ways in which algorithmic systems produce a Western system of knowledge that actively exclude those deemed Other. Notably, Safiya Noble and Ruha Benjamin show how algorithmic systems and automation reinforce racial categories and social divisions, all while proclaiming neutrality and scientific objectivity (Noble; Benjamin), a move that directly mirrors Wynter’s theory of the overrepresentation of Western Man through scientific means. Many more examples of the current technologised functioning of colonialist knowledge systems exist, for example, tracing the legacy of Carl Linnaeus’ categorisation of nature and humanity in the algorithms we use today (Dzodan 34-43), how these logics get “made flesh” through machine learning algorithms (Dixon-Román & Parisi 117-18), and the pseudoscientific anthropometric methods of 19th century anthropology that persist in today’s biometrics (Wevers 98).
Offering an update to Couldry and Mejias’ definitions, Catriona Gray argues that data colonialism is about “the interaction of orders of knowledge with orders of value” (Gray 10). She emphasises the way that the data about everyday life produced by contemporary platforms “do not appear simply in a pre- or non-commodified form” like nature or human life did for historical colonialism, but are produced always already in relation to economic value (Ibid. 14). Those that are recognised can participate in the system, in the market, in the processes of everyday life. Those that are not recognised, and are effectively placed outside of the human category, cannot participate. Gray’s observations are particularly important in the context of financial technology such as blockchain. The climate projects mentioned above map onto the Amazon rainforest an order of knowledge – what is represented as rainforest, in what way is it hierarchised, and what is not represented and effectively does not exist in the system – that is at the same time an order of value – how are things mapped onto economic value and made tradable? In addition, an order of agency emerges as well: who has the capacity to act and to control that which is represented and mapped onto that order of value?
Furthermore, I argue that there is another way in which blockchain technology reproduces the logics of the order of knowledge Wynter described. Moving from medieval religious understandings of reality through to versions of reality that are increasingly based on ideological Western humanism that operate under the guise of neutrality and objectivity, the invention of Man presents itself as truth while being selective in its representations (Erasmus 50). The medieval divinely ordered world in which humans, who were thought to be sinful by nature, could redeem themselves through pious behaviour, was a truth upheld by religious authorities. The subsequent version of truth ordered the world into the rationality of civilised Man or the irrational savageness of Others. The truth that is dominant until today orders the world through biological essentialism and economic logics. The inventions of Man were in effect the inventions of truth upheld through colonial power relations (Wynter ”Unsettling the coloniality of being” 291).
Blockchains are often also thought of in relation to truth because their distributed consensus algorithms produce an immutable and publicly accessible history of events. When Ethereum made possible the distributed execution of smart contracts, applications of the technology exploded into countless new domains promising a blockchain revolution through transparency, trustlessness, and immutability (See e.g. Tapscott & Tapscott). Blockchain’s capacity to establish truth in the context of the post-truth era has led to much excitement about exploring its applicability in diverse fields. In the process, blockchain technology came to be seen by some as a “truth machine” – which is also the title of an influential book published around this time in which blockchain is described as “a record-keeping method that brings us to a commonly accepted version of the truth that’s more reliable than any truth we’ve ever seen” (Vigna & Casey 20). Blockchains do not communicate a universal truth; they render a truth universal, just like Enlightenment humanism rendered Western Man universal. They make rational action in the face of a complex reality possible by presenting a singular authoritative version of it. Nonetheless, in this overrepresentation, “[w]hat’s been agreed upon as the truth is the truth. There is no room for debate” (Ibid. 65, emphasis in original). Blockchains provide a computationally established working-truth-cum-universal-Truth in the face of declining trust after the financial crisis and the post-truth era, capable of facilitating exchange between individuals that don’t know each other. Blockchain technology thus reinvents truth in a post-truth context. The knowledge logic of blockchain technology performs a move similar to the overrepresentation of Western Man that Wynter critiques in humanism, this time overrepresenting a market-based view on what it means to be valuable and act in accordance, invisibilising and making unworthy of attention those things that are not deemed of value. At the same time, the works cited above on the data colonialism of blockchain systems serve as a reminder that this reinvention of the truth is subject to power relations embedded in coloniality and reproduces existing power and economic imbalances.
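The rendering of an agreed-upon record as unchallengeable rests, at the most basic level, on hash-chaining: each block commits to the hash of its predecessor, so rewriting any past event invalidates everything that follows. The following Python sketch illustrates only this mechanism (the event strings and the verification function are illustrative assumptions; actual blockchains add distributed consensus such as proof-of-work or proof-of-stake on top):

```python
# Minimal sketch of hash-chaining: the basis of a blockchain's "immutable"
# history. Not a real blockchain; no networking or consensus.
import hashlib
import json

def make_block(data, prev_hash):
    # Each block records its data, the previous block's hash, and its own hash.
    body = {"data": data, "prev": prev_hash}
    block = dict(body)
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
for event in ["alice pays bob 5", "bob pays carol 2"]:
    chain.append(make_block(event, chain[-1]["hash"]))

def verify(chain):
    # The recorded history holds only if every block still matches its hash
    # and points at the hash of its predecessor.
    for prev, block in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(
            json.dumps({"data": block["data"], "prev": block["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != recomputed:
            return False
    return True

print(verify(chain))                       # True: the agreed-upon record holds
chain[1]["data"] = "alice pays bob 500"    # tamper with history
print(verify(chain))                       # False: there is no room for debate
```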
Data colonialism and the coloniality of data-based knowledge are affordances of blockchain technology, but it is important at this point to refrain from determinism. Use of the technology does not automatically follow colonial patterns. There are, for example, those who explore how blockchain’s affordances can be subverted to make space for different, non-financial and more-than-human ways of relating. Below, I will explore how these examples relate to Wynter’s thought towards different ways of being and being together.
Sylvia Wynter's 'plot': a place to practice different social relations
[T]he planters gave the slaves plots of land on which to grow food to feed themselves in order to maximize profits. We suggest that this plot system was [...] the focus of resistance to the market system and market values. [...] For African peasants transplanted to the plot all the structures of value that had been created by traditional societies of Africa, the land remained the Earth. [...] Around the growing of yam, of food for survival, he created on the plot a folk culture – the basis of a social order – in three hundred years. (Wynter ”Novel and history” 99)
Wynter describes plots as small, imperfect corners of relative self-determination within the larger context of colonial plantations. Plantation owners provided enslaved people with these little plots of land in order to drive costs down, forcing enslaved people to produce their own food on barely fertile ground that was useless to the plantation. But the plot also offered a space away from the attention of the plantation owner: a space for ways of being together that were not possible on the plantation, reinvigorating the values and traditions of African cultures in which earth and people are cared for in a spiritual and communal sense. Moving beyond historical descriptions into analogies that continue to resonate throughout the centuries, Wynter explains that if the structure of the plantation represents the institutions that order and control society, even after the abolition of slavery, the plot is where people express and reshape their own culture. In this predicament, everyone is undeniably involved in the structures that dominate society, but participating in the plot means that there is ambiguity in that involvement and other horizons may start to appear. With the plot, Wynter shows that it is possible to create space for different social relations within larger contexts of exploitation and extraction, and possibly move beyond the incapacitating ubiquity of the dominating structures (Ibid. ”Novel and history” 96-100).
Here, I want to take De Vries’ cue to explore what “plot work as an artistic praxis” (de Vries n.p.) might mean. Just like the historical plot, artistic work is implicated in dominant institutional and capitalist logics. De Vries asks how it can learn from Wynter’s thought on the phenomenon of the plot and create space for relating outside of those logics through its own kind of plot work. Responding to De Vries’ question, my own exploration thus focusses on how blockchain – knowing that it often reproduces colonial logics – can also be engaged with in a way that constitutes a plot. Where are the bits of the blockchain space that represent culture rather than control?
While historically, plots were made available for reasons of efficiency by plantation owners, DAOs can be built by any community themselves. The idea of DAOs as countercultural DIY placemaking practices is a recurring theme in Radical Friends: Decentralised Autonomous Organisations and the Arts, a book edited by Ruth Catlow and Penny Rafferty, two prominent thinkers, artists, and organisers in the countercultural DAO field (Catlow & Rafferty). While DAO technology may be used for such DIY practices, Catlow stresses the necessity of awareness of the relationship between the technology and historical and ongoing exploitations similar to some of those Wynter lays out:
Crucial to this project is an acknowledgement of the multiple layers of devastating losses that are the result of colonial extractivist petrocapitalism upon which this webbed mechanosphere[3] is built: the mass dispossession, destruction and loss of human lives, the loss of species biodiversity and habitats and the impoverishment of futurity that is the aftermath. (Catlow ”Translocal Belonging” 177-178)
Catlow and Rafferty write that to get out of the havoc wreaked by centuries of colonial capitalism, the technology must be used to “terraform a myriad tiny worlds; and smuggle out lively and strange cultural forms into more consensual realities in the world at large” (Catlow & Rafferty ”Introduction” 40). By playful engagement with DAOs, Catlow explains that people “can sensitise themselves to the behaviours that might accompany new social relations that emerge in peer-to-peer, translocal networks” (Catlow ”To Larp a DAO” 307). Catlow and Rafferty’s thoughts on the potential of DAOs are framed in relation to those historical and ongoing exploitative power relations and propose that we need to build new worlds, or indeed plots, in order to make different futures possible.
They refer to this capacity of DAOs to bring about new worlds as prefiguration (Catlow & Rafferty ”Introduction” 46; Catlow ”To Larp a DAO” 307), a term defined as “the embodiment, within the ongoing political practice of a movement, of those forms of social relations, decision-making, culture, and human experience that are the ultimate goal” (Boggs 7). The DAO-plot they describe offers a space for this prefigurative embodiment and relating, a space to practice the cosmogonies that future generations can embody. An example of such a prefigurative, decolonial DAO might be the one the Black Socialists of America are building. Deeply informed by the work of radical Black scholars and activists, they aim to support cooperative communities, mutual aid networks, and labour organisers through the non-hierarchical governance structures and collective ownership that DAOs afford. The organisation speaks of “building a new world in the shell of the old”, prefiguring a socialist plot within rampant colonial capitalism (”Our Strategy”).
Another example relates to the way that the abstractions of tokenisation invisibilise the care that is needed to sustain that which is represented on a blockchain. The logics of care and capitalism generally oppose each other (Lynch 203), and therefore, perhaps care could be a chisel for blockchain-based plot work to carve a space that offers an alternative to its surroundings. Artist Sarah Friend undermines the speculative financial alienation of many NFT projects by programming her Lifeforms NFTs in such a way that they ‘die’ if they are not cared for. In her operationalisation of care, this means that the NFT has to be given away for free to someone else, who then takes over the caring responsibilities (Friend). Lifeforms offers up a different way of relating, not only to the NFT, but also to those around you, calling on them to care for instead of capitalise on something.
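To give a sense of how such a care-or-expire rule might be operationalised in code, the following Python sketch illustrates the general idea described above; it is a hypothetical toy, not Sarah Friend’s actual contract code, and the class names and the 90-day care window are assumptions made only for illustration:

```python
# Hypothetical sketch of a care-or-expire token: it "dies" unless it is
# given away, for free, within a fixed care window. Illustrative only.
import time

CARE_WINDOW = 90 * 24 * 60 * 60  # assumed care window of 90 days, in seconds

class CaredForToken:
    def __init__(self, owner):
        self.owner = owner
        self.last_transfer = time.time()
        self.alive = True

    def is_alive(self, now=None):
        now = now if now is not None else time.time()
        if self.alive and now - self.last_transfer > CARE_WINDOW:
            self.alive = False  # neglected tokens die permanently
        return self.alive

    def give_away(self, new_owner, price=0):
        # Care is operationalised as a free transfer: selling is not caring.
        if price != 0:
            raise ValueError("the token can only be given away, not sold")
        if not self.is_alive():
            raise RuntimeError("this lifeform has already died")
        self.owner = new_owner
        self.last_transfer = time.time()

token = CaredForToken(owner="alice")
token.give_away("bob")  # bob now carries the caring responsibility
print(token.owner, token.is_alive())
```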
A third example is the Corn Council, a DAO imagined as part of a speculative design research project (Heitlinger et al.). Central in it is the wish to undo the alienation that plantation capitalism produces. This DAO rewards “spending time with plants, [...] caring for them, kindling new care-taking relationships” (Ibid. 11). Although they are tokenised, these rewards are not exchangeable and can only be used in the community in ways that support the commons. The Corn Council creates a multi-species community in which crops are stakeholders rather than commodities (Ibid. 12). These are some budding examples of how blockchain plots might be thought of as places in which different social relations can take root and grow, while also always being embedded in larger systems of extraction.
Sylvia Wynter's 'plot': a different cosmogony to understand life through
[W]hat I want to uncover, to reveal, here is that which lies behind the ostensible truths of our everyday reality, but which we normally cannot see. It is that of the dynamic of what I now call the autopoiesis of being hybridly human. (Wynter in Wynter & McKittrick 27, emphasis in original)
To Wynter, ‘the plot’ is not only an analogy for a place to practice difference, but it also represents a different cosmogony to understand life through. De Vries explains: the plot is “a conceptual tool and historic reality. It is figurative language and a challenge to current spatial arrangements. It is a verb and a narrative device” (de Vries 12, emphasis in original). It is a place and a story. Exactly this irreducibility makes the term so valuable. Wynter’s history of the invention of Man shows how social ordering of life, and the real experiences that are a consequence of this ordering, are wrapped up with the ontological question of what (human) life is, and the coloniality of the powers at play in answering this question. In this process, Man constitutes the human first and foremost in biological terms, and pushes those that do not fit these terms into spaces of Otherness. However, Wynter adds, humans are always a hybrid, natural and cultural, biological beings and storytellers (Wynter ”Unsettling the coloniality of being” 295, 313-314). Reflecting on these ideas, Katherine McKittrick concisely summarises humans, in the universalised form of Man, as “storytellers who now storytellingly invent themselves as being purely biological” (McKittrick in Wynter & McKittrick 11, emphasis in original). Exactly this realisation is what offers potential for a different future. Wynter writes that as hybrid beings, we have a
uniquely auto-instituting mode of living being, we humans cannot pre-exist our cosmogonies or origin myths/stories/narratives anymore than a bee, at the purely biological level of life, can pre-exist its beehive. (Wynter ”The ceremony found” 213, emphasis in original)
In other words, living and imagining a different life need to be done at the same time. On the plot, new myths about life and sociality can be told and the related social relations practiced simultaneously; different understandings of what it means to be human and to live with (more-than-human) others can be explored, iterated on, and tested. Wynter explains that the stories humans tell have the capacity to institute new communities around new conceptions of life, to create new plots for future generations to inhabit. This is the magic that Wynter refers to in the epigraph of this article, the capacity of people to think and practice new realities into being.
Penny Rafferty thinks of DAOs as a tool for auto-institution. To her, DAOs are like magical sigils that express intentions by making explicit what kind of world is worked towards, and get realised through repeated rituals (Rafferty 112-13). She takes this idea from Chaos Magick, an occultist subculture from the 1970s that – heavily influenced by the work of postmodern theorists – argues that truth is subject to belief, and thus by changing one’s beliefs through the use of sigils, reality can be changed (Otto 765). For Rafferty, DAOs are sigils that make explicit what kind of new world a community wants to establish, and through the rituals of proposals and votes actualise these new realities. Rafferty’s DAOs are a way to establish the new mythologies of the plot. For her, the new origin story starts from a reappreciation of chaos. In neoliberal capitalism, chaos appears as a dangerous element that evades control, but Rafferty instead wants to look to it as a source of irreducible life. Chaos, she writes, is an “early genesis hole, this empty yet full state [that] was once akin to a babbling spring, oozing life and creativity” (Rafferty 103).
Rafferty is not alone in her mythologising DAO practices. Some DAOs, like MolochDAO (“The Original Grant Giving DAO”) and RaidGuild (“A Decentralized Collective”), present themselves as part of fantastical stories or as if they exist in a parallel universe. These DAO mythologies reference the epic battles and mythical tales that imagine their members as self-organising collectives fighting giant villains or monsters. Although it might seem escapist, Kei Kreutler, thinker and maker in the DAO space, recognises cooperative values in DAOs like these. While their mythologies are not overtly politicised and seem to exist in a parallel fantasy universe, they reimagine social relations among their members in a very concrete way. The practicalities of organising a DAO – e.g. decisions on how to manage shared resources – offer a space to model and practice the social relations that could exist outside of capitalism even if those are not the terms used (Kreutler). The villains these DAOs fight appear to be capitalists, their extractive models, and centralised ownership.
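To make the practicalities of organising a DAO referred to above a little more concrete, the following Python sketch shows the proposal-and-vote cycle that most DAO governance revolves around; the member list, token weights, and simple-majority rule are illustrative assumptions, not the governance contract of any particular DAO:

```python
# Illustrative sketch of a DAO's proposal-and-vote ritual. Assumed rules only.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: float = 0.0
    votes_against: float = 0.0
    voters: set = field(default_factory=set)

class MiniDAO:
    def __init__(self, members):
        # members: mapping of member name -> voting weight (e.g. token holdings)
        self.members = members
        self.proposals = []

    def propose(self, description):
        proposal = Proposal(description)
        self.proposals.append(proposal)
        return proposal

    def vote(self, proposal, member, support):
        if member not in self.members or member in proposal.voters:
            raise ValueError("not a member, or already voted")
        proposal.voters.add(member)
        if support:
            proposal.votes_for += self.members[member]
        else:
            proposal.votes_against += self.members[member]

    def passed(self, proposal):
        # Simple majority of the weight that actually voted.
        return proposal.votes_for > proposal.votes_against

dao = MiniDAO({"ana": 1, "ben": 1, "cleo": 2})
p = dao.propose("fund the community garden from the shared treasury")
dao.vote(p, "ana", True)
dao.vote(p, "cleo", True)
dao.vote(p, "ben", False)
print(dao.passed(p))  # True
```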
Rafferty proposes DAOs as “an experimental practice for moving towards a different way of living together” that “could allow us to collectively set up [...] void states together, and through the act of proposal making and voting, harness intention to regulate new reality making devices” (Rafferty 107). The mythologising DAOs allow for a new cosmogony, a new beginning out of a void state, and create an alternative to the exploitations of colonial capitalism. This void is made together with others; it is the result of bottom-up processes that resist the urge to universalise or become unalterable. Although these processes are collective, those collectives don’t have to stay cohesive: they can mutate, fork, and become plural as a result of changing priorities, beliefs, or urgencies. In this way, DAO-plots offer a new starting point from which to rethink what constitutes life in all its untokenisable dimensions. Plotting on a DAO is a process that will never be perfect: it always has to relate to an extractive outside, but it can always be iterated upon to become stronger:
The creation of any DAO is a psychospiritual quest for an open-ended micro reality machine. You create this small reality machine with a number of others and let it run, fail, rebuild and evolve. (Ibid. 112)
Conclusion
Drawing on the work of Sylvia Wynter, I have traced the parallels between historical colonialism and blockchain colonialism. The concept of data colonialism offers useful starting points for the theorisation of these parallel functions in the renewal of colonial relations and the expansion of the capitalisation of life. However, I showed that the affordances of blockchain technology also call for expansions and nuances to Couldry and Mejias’ concept, particularly on the way colonial orders of knowledge and value are intertwined in the technology. I contribute a reading of colonial blockchain practices through the theory of Sylvia Wynter toward this end. However, my contribution is intended as the start of more future work toward the establishment of a comprehensive definition of blockchain colonialism in the context of a broader array of decolonial theory.
Wynter’s thought is useful for understanding the coloniality of contemporary systems, but it is also generative towards different futures. In response to de Vries, I have argued for understanding the countercultural, prefigurative capacities of DAOs as a form of artistic plot work. In Wynter’s unpublished but influential manuscript Black Metamorphosis: New Natives in a New World, she writes that “decentralized groups” working in relation to a “framework of belief” have the capacity to “create a counter world” in which participants are involved “creatively in their destiny” (Wynter, “Black Metamorphosis” 183-84). What gives Wynter hope are the organisational practices of these decentralised groups: the way in which they allow members to shape their own futures through collaboration and spiritual practices that “attain a more authentic order of being” than coloniality provides (Ibid. 184). The reality machines of DAO-based plots are a way for this decentralised work toward new mythologies and new social relations to take shape.
These plots offer room for alternative social systems, but Wynter is clear: the plantation and its exploitative market logics are strong and will endure, at least for the time being. The plot can provide a place to find “a focus of criticism against the impossible reality in which we are enmeshed” (Wynter 100). Everyone is undeniably involved in that which is critiqued, but participating in the plot means that there is ambiguity in that involvement. This is where resistance, however marginal, finds its breeding ground (Ibid. 100-01).
Notes
- ↑ In applying the decolonial lens that Wynter offers, I want to acknowledge my own positionality. My experience as a white European person influenced the examples that I chose. In this sense, these examples enjoy their own privilege as well. Although I have experienced oppressive forces – e.g. in the form of sexism in the male-dominated field of technology – I do not know the oppressive effects of coloniality from my own experience. In educating myself through, among others, the work of Wynter, I hope to do justice to its complexities and contribute to revealing its continued influence in contemporary socio-technical systems.
- ↑ They write that “[i]n deploying the concept of data colonialism, our goal is not to make loose analogies to the content or form, let alone the physical violence, of historical colonialism” (Couldry and Mejias 339). I second this nuance in my exploration of blockchain’s relation to the concept of data colonialism.
- ↑ The phrase ‘webbed mechanosphere’ is used in reference to the networked infrastructures of the web.
Works cited
“A Decentralized Collective of Mercenaries Ready to Slay Your Web3 Product Demons.” RaidGuild, n.d. https://www.raidguild.org/.
Benjamin, Ruha. Race After Technology. Abolitionist Tools for the New Jim Code. Polity Press, 2019.
Boggs, Carl. “Marxism, prefigurative communism, and the problem of workers’ control.” Radical America, vol. 11, no. 6, 1977, pp. 99-122. https://repository.library.brown.edu/studio/item/bdr:89258/.
Catlow, Ruth. “To Larp a Dao.” Radical Friends: Decentralised Autonomous Organisations and the Arts, edited by Ruth Catlow and Penny Rafferty, Torque Editions, 2022, pp. 305-313.
___. “Translocal Belonging and Cultural Cooperation After the Blockchain – a Citizen Sci-Fi.” Radical Friends: Decentralised Autonomous Organisations and the Arts, edited by Ruth Catlow and Penny Rafferty, Torque Editions, 2022, pp. 173-189.
Catlow, Ruth, and Penny Rafferty. “Introduction: What is Radical Friendship Made of?” Radical Friends: Decentralised Autonomous Organisations and the Arts, edited by Ruth Catlow and Penny Rafferty, Torque Editions, 2022, pp. 26-46.
___, editors. Radical Friends: Decentralised Autonomous Organisations and the Arts. Torque Editions, 2022.
Couldry, Nick, and Ulises A. Mejias. “Data colonialism: Rethinking big data’s relation to the contemporary subject.” Television & New Media, vol. 20, no. 4, 2019, pp. 336-49. https://doi.org/10.1177/1527476418796632.
___. The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism. Stanford University Press, 2020.
___. “The decolonial turn in data and technology research: what is at stake and where is it heading.” Information, Communication & Society, 2021, pp. 1-17. https://doi.org/10.1080/1369118X.2021.1986102.
Crandall, Jillian. “Blockchains and the ‘Chains of Empire’: Contextualizing Blockchain, Cryptocurrency, and Neoliberalism in Puerto Rico.” Design and Culture, vol. 11, no. 3, 2019, pp. 279-300. https://doi.org/10.1080/17547075.2019.1673989.
de Vries, Patricia. Plot Work as an Artistic Praxis in Today’s Cityscapes. An Introduction to the Lectorate Art & Spatial Praxis / the City. Gerrit Rietveld Academy, 2022. https://rietveldacademie.nl/en/page/24268/an-introduction-to-the-lectorate-art-spatial-praxis-the-city.
Dixon-Román, Ezekiel, and Luciana Parisi. “Data capitalism and the counter futures of ethics in artificial intelligence.” Communication and the Public, vol. 5, no. 3-4, 2020, pp. 116-21. https://doi.org/10.1177/2057047320972029.
Dzodan, Flavia. “Algorithms as Cartomancy.” Schemas of Uncertainty. Soothsayers and Soft AI, edited by Callum Copley and Danae Io. PUB & Sandberg Instituut, 2019, pp. 19-46.
Erasmus, Zimitri. “Sylvia Wynter’s theory of the human: Counter-, not post-humanist.” Theory, Culture & Society, vol. 37, no. 6, 2020, pp. 47-65. https://doi.org/10.1177/0263276420936333.
Friend, Sarah. Lifeforms. 2021. NFT collection. https://lifeforms.supply/.
Gray, Catriona. “More than Extraction: Rethinking Data’s Colonial Political Economy.” International Political Sociology, vol. 17, no. 2, 2023. https://doi.org/10.1093/ips/olad007.
Heitlinger, Sara et al. “Algorithmic food justice: Co-designing more-than-human blockchain futures for the food commons.” Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021, pp. 1-17. https://dl.acm.org/doi/10.1145/3411764.3445655.
Howson, Peter. “Climate Crises and Crypto-Colonialism: Conjuring Value on the Blockchain Frontiers of the Global South.” Frontiers in Blockchain, vol. 3, 2020. https://doi.org/10.3389/fbloc.2020.00022.
Howson, Peter. “Crypto‐giving and surveillance philanthropy: Exploring the trade‐offs in blockchain innovation for nonprofits.” Nonprofit Management and Leadership, vol. 31, no. 4, 2021, pp. 805-20. https://doi.org/10.1002/nml.21452.
Juárez, Geraldine. “The Ghostchain. (Or taking things for what they are).” Paletten, no. 325, 2021. https://paletten.net/artiklar/the-ghostchain.
Jutel, Olivier. “Blockchain imperialism in the Pacific.” Big Data & Society, vol. 8, no. 1, 2021, pp. 1-14. https://doi.org/10.1177/2053951720985249.
Kreutler, Kei. “A prehistory of DAOs: Cooperatives, gaming guilds, and the networks to come.” gnosis guild, 21 July 2021, https://gnosisguild.mirror.xyz/t4F5rItMw4-mlpLZf5JQhElbDfQ2JRVKAzEpanyxW1Q.
Lynch, Kathleen. Care and Capitalism. John Wiley & Sons, 2021.
Maurer, Bill, Taylor C. Nelms, and Lana Swartz. “‘When perhaps the real problem is money itself!’: The Practical Materiality of Bitcoin.” Social Semiotics, vol. 23, no. 2, 2013, pp. 261-77. https://doi.org/10.1080/10350330.2013.777594.
Maurer, Bill, and Lana Swartz. “Wild, Wild West: A View From Two Californian Schoolmarms.” Moneylab Reader: An Intervention in Digital Economy, edited by Geert Lovink, Nathaniel Tkacz, and Patricia de Vries, Institute of Network Cultures, 2015, pp. 222–29.
“Moss Amazon NFT: The forest preserved in every hectare.” Moss, n.d. https://nft.moss.earth/.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.
Otto, Bernd-Christian. “The Illuminates of Thanateros and the institutionalisation of religious individualisation.” Religious Individualisation: Historical Dimensions and Comparative Perspectives, edited by Martin Fuchs, Anje Linkenbach, Martin Mulsow, Bernd-Christian Otto, Rahul Bjørn Parson, and Jörg Rüpke, De Gruyter, 2019, pp. 759-96.
“Our Strategy.” Black Socialists of America, n.d. https://blacksocialists.us/our-strategy.
Quijano, Aníbal. “Coloniality and modernity/rationality.” Cultural Studies, vol. 21, no. 2-3, 2007, pp. 168-78. https://doi.org/10.1080/09502380601164353.
Rafferty, Penny. “The Reappropriation of Life and the Living – a Cosmic Battleground.” Radical Friends: Decentralised Autonomous Organisations and the Arts, edited by Ruth Catlow and Penny Rafferty, Torque Editions, 2022, pp. 102-114.
Semenzin, Silvia. Blockchain & data justice: The political culture of technology. 2021. Università degli Studi di Milano & Università degli Studi di Torino, PhD dissertation. https://iris.unito.it/handle/2318/1849682.
Tapscott, Don, and Alex Tapscott. Blockchain Revolution: How the Technology Behind Bitcoin and Other Cryptocurrencies is Changing the World. Portfolio / Penguin, 2016.
Thatcher, Jim et al. “Data colonialism through accumulation by dispossession: New metaphors for daily data.” Environment and Planning D: Society and Space, vol. 34, no. 6, 2016, pp. 990-1006. https://doi.org/10.1177/0263775816633195.
“The Original Grant Giving DAO.” MolochDAO, n.d. https://molochdao.com/.
“Treasure the Forest.” Nemus, n.d. https://nemus.earth/.
Vigna, Paul, and Michael J. Casey. The Truth Machine: The Blockchain and the Future of Everything. St. Martin’s Press, 2018.
Wevers, Rosa. “Unmasking Biometrics’ Biases: Facing gender, race, class and ability in biometric data collection.” TMG Journal for Media History, vol. 21, no. 2, 2018, pp. 89-105. https://doi.org/10.18146/2213-7653.2018.368.
Wynter, Sylvia. “Novel and history, plot and plantation.” Savacou, vol. 5, no. 1, 1971, pp. 95-102.
___. “The Pope Must Have Been Drunk, the King of Castile a Madman: Culture as Actuality, and the Caribbean Rethinking Modernity.” The Reordering of Culture: Latin America, the Caribbean and Canada in the Hood, edited by Alvina Ruprecht and Cecilia Taiana, Carleton University Press, 1995, pp. 17-42.
___. “Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, After Man, Its Overrepresentation—An Argument.” The New Centennial Review, vol. 3, no. 3, 2003, pp. 257-337. https://www.jstor.org/stable/41949874.
___. “The Ceremony Found. Towards the Autopoetic Turn/overturn, Its Autonomy of Human Agency and Extraterritoriality of (Self-)cognition.” Black Knowledges/black Struggles. Essays in Critical Epistemology, edited by Jason R. Ambroise and Sabine Broeck, Liverpool University Press, 2015, pp. 184–252.
___. Black Metamorphosis. New Natives in a New World. Unpublished manuscript, n.d.
Wynter, Sylvia, and Katherine McKittrick. “Unparalleled Catastrophe for Our Species? Or, to Give Humanness a Different Future: Conversations.” Sylvia Wynter: On Being Human as Praxis, edited by Katherine McKittrick, Duke University Press, 2015, pp. 9–89.
Shusha Niederberger
Calling the User:
Interpellation and Narration of User Subjectivity in Mastodon and Trans*Feminist Servers
Calling the User: Interpellation and Narration of User Subjectivity in Mastodon and Trans*Feminist Servers
Abstract
In recent years, a large body of work has analyzed the cultural and social ramifications of data-driven digital environments that currently structure digital practice. However, the position of the user has scarcely been developed in this field.
In this paper I discuss how user subject positions are invoked by digital infrastructures that present an alternative to big technology platforms. By subject positions I mean a shared and often unarticulated understanding of what kind of technological practice is meant when we talk about users: the user as a cultural form. I start with the analysis of a crisis in user subjectivity as it manifested in the migratory waves from Twitter to Mastodon at the end of 2022, after Elon Musk bought Twitter. Like Twitter, Mastodon is a microblogging service, but it operates as a network of connected servers run by nonprofit organizations and communities. I argue that Mastodon—by way of its infrastructural organization around servers and communities—invokes a different subject position of the user than the self-contained autonomous liberal subject, one that is based on a relationship with a community. In a second case study, I discuss how the artistic activist practices of Trans*Feminist Servers create a territory to rethink relations to technology itself, most prominently through raising questions of servitude: what does it mean to serve and to be served? I argue that through this, Trans*Feminist Servers are able to reformulate use as part of relations of care and maintenance and implement them in their technological practice. As I conclude, both Mastodon and Trans*Feminist Servers project a user exceeding the neoliberal subject. While Mastodon does so by proposing a subject position related to a community first, Trans*Feminist Servers go a step further, opening up use as a practice beyond consumption and thus operating on relations to infrastructure itself.
Introduction
People are constantly involved in a process of becoming a user through technology. Today, technology usually means data-driven environments that permeate everyday life, from the personal to the professional sphere, and shape the ways we relate to each other, to ourselves, and to the world as well as how we organize on a social and political level. Data is everywhere, and large amounts of data are produced by users through interactions with platforms and cloud-based digital infrastructures. What does it mean to be a user today? How does data-driven technology not only profit from user interaction, but also produce the 'user'? How can we think through the relations of platforms and users in ways that offer different imaginations, and thus open up a space to act?
This article is interested in the user as a cultural form, a mostly implicit and unarticulated shared understanding of what kind of technological practice is meant when we talk about users. This is not a psychological perspective focused on the inner life of an individual, nor is it an anthropological view of a group of living persons in their specific cultural contexts. The user as a cultural form is concerned with subjectivity, but as shared imaginations. Subjectivity itself is individual: the temporal situation of a person through which that person makes sense of the world. It is a continuous process of becoming particular in relation to the complexities of the world. But as philosopher Olga Goriunova highlights, subjectivity is always developed in relation to shared imaginations about what it means to be in the world, e.g., as a woman, an adult, or — in our case — a user. These shared cultural imaginations are called 'subject positions' (Goriunova, “Uploading Our Libraries”). They are role models or figurations and provide a position in the world from which to make sense of it. As shared imaginations, subject positions are articulated and developed in the cultural domain. Furthermore, they are also aesthetic positions in the sense that they formulate a position from which practice is possible, as Goriunova insists. Thus subject positions are shaped by practice and the communities around them. Goriunova has exemplified this for very specific practices at the intersection between commons and digital activist/artistic practices (Goriunova, “Uploading Our Libraries”), but the principle of linking practice and subjectivity also applies to the more general field of everyday use.
Despite their central position in the production of data, users are considered only at the margins of current critical discourses about the implications of data-driven environments. In the field of Critical Data Studies, a substantial body of work has emerged on the cultural and political ramifications of data-driven environments (Boyd and Crawford; Iliadis and Russo). It raises important questions about flaws and bias in data (Eubanks), about how data-driven systems enhance inequality (O’Neil), extend colonial modes of exploitation and thingification (Couldry and Mejias), and install new forms of discrimination (Benjamin). However, the position of the user remains underdeveloped in this field and is primarily discussed in terms of abuse and exploitation.
But big data is not only a new way of organizing and operationalizing knowledge obtained from users; it also constitutes a new mode of signification. As law philosopher Antoinette Rouvroy explains, data produces meaning out of itself, not about the world. The data about a user’s browsing history does not signify her journey surfing the web, but is taken as an indicator of personality, age, gender, interests, economic situation, and many more, often secret categories. The recorded traces users leave thus take on a life of their own. This process of signification is not indexical: data does not operate through representation or causality, but through probability and statistics. Goriunova suggests the term 'distance' to describe this nonindexical relation between people and data (Goriunova, “The Digital Subject”). It is through distance that big data produces new modes of governmentality as well as new subjects, with far-reaching consequences, e.g., for the legal domain (Rouvroy).
How users make sense of this distance is investigated in another emerging field I call 'User Studies'. It is a body of work in anthropology that addresses sense-making processes about algorithms and platforms (Siles et al.; Bucher; Rader and Gray; Devendorf and Goodman). These studies articulate technology not as essentialist, independent artefacts, but as something that is created through shared praxis, as culture (Seaver). They are an important contribution to the understanding of the position users have in the contemporary data-driven digital world. However, through their focus on users as individuals and on bottom-up sense-making processes, they are only marginally concerned with the subjectivity of users, discussing it under the term identity (Karizat et al.). They often fail to address the political dimensions as articulated in Critical Data Studies and do not consider the cultural forms of subject positions.
Subjectivity is linked not only to technology, but also to the broader sociocultural environment. This has been a recurrent topic in Cultural Studies (Hall). Here, the term 'subjectivity' has a meaning similar to 'subject positions', as explained above. Especially in feminist scholarship, there is an ongoing debate about how subjectivity is shaped by neoliberal formations (Banet-Weiser) and how it responds to critical perspectives, incorporating them into new narratives about femininity as self-empowered and independent, however problematic and conflicting these may be (Gill and Kanai). This body of work highlights the role of narratives mobilizing values, which circulate in a culture deeply shaped by capitalist dynamics. However, it is not directly concerned with users and big data technologies, but provides a backdrop of the manifold ways culture and institutions are involved in the creation, maintenance, and transformation of widely shared basic forms of subjectivity that the subject position of the user inherits.
The user as a distinct part of the cultural history of technology is only rarely discussed in its own right. Notable examples are Olia Lialina, who mapped conceptualizations of the user in the historical discourse of HCI (human-computer interaction) (Lialina), and Joanne McNeil, who traced a cultural history of the Internet from the perspective of users themselves, highlighting the diversity of experiences and cultural differences that manifest in and through technology (McNeil).
The shared imagination of the user subject position as a specific position in technological practice is deeply political, because it is not only a bottom-up sense-making process, as investigated by User Studies, but claims subjectivity as precisely the place where the power relations in technology, as analyzed in Critical Data Studies, are inscribed in the self-understanding of users, thus reproducing them. As already explained, this analysis takes subjectivity—and by extension subject positions—as a place of being affected, but also as a place of claiming agency. This analysis follows Louis Althusser’s concept of interpellation (Althusser et al.), draws on performative concepts of identity (Butler), and extends a line of thinking that considers how subjectivities are both expressed in and shaped by mass media (Silverman and Atkinson).
In this paper I will bring these strands of thinking together through an analysis of two case studies. The first is an analysis of a contemporary event: the wave of migration from Twitter to Mastodon following the acquisition of the former by Elon Musk. I argue that some of the difficulties of switching to Mastodon can be analyzed as a crisis in the subject position of the user, and I will discuss the role of infrastructural organization in this crisis.
Because subject positions live and are transformed in the cultural field, cultural and artistic practices provide a privileged position for developing methods and practices of doing otherwise. In the second case study I discuss Trans*Feminist Servers as an artistic-activist strategy on the terrain of the cultural imagination of technology itself. Trans*Feminist Servers aim at developing other subjectivities and fostering different practices of being a user, both as a conceptual tool and as lived technological practice. This allows for reclaiming user practice as a place for careful relationships not only with a community (as in the first case study of Mastodon’s interpellation of user subjectivity), but also with technology itself.
The Twitter crisis
When Elon Musk bought Twitter at the end of October 2022, people started discussing alternatives. One of them was Mastodon—like Twitter, a micro-blogging service. Unlike Twitter, Mastodon is not corporate-owned. It is a network of connected servers that are often run by small collectives and nonprofit organizations. Following the acquisition of Twitter by Musk and during every wave of policy change that followed, the Mastodon network showed waves of new registrations. In little more than three months, the Mastodon network grew from 4.5 to 9 million users and, more significantly, from 3,700 to 17,000 servers (according to the User Count Bot for all known Mastodon instances @mastodonusercount@mastodon.social). For comparison: Twitter has 368 million users (Iqbal), so even with the steady growth of Mastodon’s user count, changing from Twitter to Mastodon is a movement through technological scale, with many consequences (because platforms thrive on network effects: the more numerous their users, the more valuable the platform is for everybody [Srnicek 45]). But on the part of the users, this was often experienced as a crisis in subjectivity:
It is important to understand that this is not only a personal crisis. When my friend articulates here that he is not a nerd and hence Mastodon is not for him, it is not only about him. It is also about the subject position of the user being different from that of the nerd.
The return of the server: infrastructure and subjectivity
Both the user and the nerd are subject positions of technological practice. One aspect of this crisis in user subjectivity is what I call 'the return of the server'. Even if scale is an important aspect of the user experience, the difference between Twitter and Mastodon is not only one of numerical scale in terms of user count, but first and foremost one of organization on an infrastructural level. Twitter operates as a centralized platform; it is a unified service accessed through an app, and its data and processes are located in the cloud. Mastodon, however, runs on a decentralized network of federated[1] servers connected by a shared protocol.
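To make this concrete, the following is a minimal sketch of what that shared protocol (ActivityPub, as described in note 2) exchanges between two federated servers: one server wraps a post in a “Create” activity and delivers it to an inbox on another server. The server names, accounts and content are hypothetical placeholders, and the request signing that real servers require is omitted.

```python
import json
import urllib.request

# A "Create" activity wrapping a post (a "Note"), as one federated server
# would deliver it to a recipient's inbox on another server.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://server-a.example/users/alice",
    "to": ["https://server-b.example/users/bob"],
    "object": {
        "type": "Note",
        "attributedTo": "https://server-a.example/users/alice",
        "content": "Hello from another server in the federation.",
    },
}

# Delivery is an HTTP POST of the activity to the recipient's inbox.
# Real servers also sign this request so the receiving server can verify
# which instance is speaking; that step is left out of this sketch.
request = urllib.request.Request(
    "https://server-b.example/users/bob/inbox",
    data=json.dumps(activity).encode("utf-8"),
    headers={"Content-Type": "application/activity+json"},
    method="POST",
)
# urllib.request.urlopen(request)  # would send the activity if these servers existed
```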
Of course, technically speaking, big tech platforms and the cloud also operate on servers. Servers are still the main nodes in the infrastructure of the Internet: it is on servers that data is stored and user requests are processed. But on big tech platforms, servers have been abstracted away in order to make technical systems scalable (Monroe). Servers have disappeared from the view of users due to this recent additional step in the chain of abstractions on which digital infrastructure is built. And with it, a contextual and materialist understanding of digital infrastructure has disappeared as well. Specific machines, local contexts, and a diversity of practices turned into immaterial services and apps. Servers have been replaced with the cloud, a metaphor suggesting quite the opposite of the massive, energy-hungry data centers powering large-scale digital infrastructure. Thus, in the age of cloud computing, we simply cannot know the number of servers Twitter is running on.
The return of the server happens very prominently at the first step of the signup process for Mastodon. Here, Mastodon asks users to pick a server and hence a specific context to join. To make this choice, users need to identify themselves in ways that differ from those on big technology platforms. When signing up to a commercial platform, users are asked to identify themselves as a classical autonomous (self-contained) liberal individual. In contrast, the sign-up process for Mastodon asks users to choose a server, which means identifying themselves in relation to a community first.
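The chosen server is also literally part of a user’s federated identity: an address such as @alice@queer.example names both a person and a community. As a minimal sketch (assuming the WebFinger standard that the Fediverse uses for account lookup, with a hypothetical handle and domain), this is roughly how another server resolves such an address by asking the home server directly.

```python
import json
import urllib.request

def resolve_handle(handle: str) -> dict:
    """Resolve a federated address like 'alice@queer.example' on its home server."""
    user, domain = handle.lstrip("@").split("@")
    # WebFinger lookup: the home server answers for its own members.
    url = f"https://{domain}/.well-known/webfinger?resource=acct:{user}@{domain}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

# The response points to the account's profile and ActivityPub actor;
# without the server, the address means nothing.
# resolve_handle("alice@queer.example")
```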
In the 1960s, Marxist philosopher Louis Althusser explained that the social and political order of the world is continuously updated in individuals by means of a process he called 'interpellation'. In his view, the subject does not exist independently of its surroundings, but is created and sustained (hailed) through the calls of institutions (Althusser et al.) and, in the context of this text, infrastructures. It is through their infrastructural organization that Twitter and Mastodon interpellate their users, and, as we have seen, this interpellation brings forward different imaginations of what a user is. This means that subjectivity is never only personal, or interior, but that the personal, the psychological, and the individual are deeply linked to the world and its social, economic, political, and cultural formations. My friend’s interpretation of the sign-up process for Mastodon as nerdy points to an understanding of servers being outside of the domain of users and—as technological artifacts—belonging to the nerd. But it also points to something deeper: as the sign-up process of Twitter indicates, contemporary user subjectivity is closely aligned with liberal subjectivity. This autonomous, calculating and self-regulating subject is a subject position in itself, serving as a background of user subjectivity. Hence, the process of infrastructural interpellation is not a deterministic process, but operates in relation to other callings, self-understandings, and already established subject positions. Infrastructural interpellation can confirm existing normative subject positions, but, as we have seen with Mastodon, it can also result in tensions. These tensions articulate not only a problem, but also a space for difference. Thus, interpellation through technology is a performative process that consists of numerous performative gestures that maintain identity, but also bear the possibilities of difference (Butler). This means that subjectivity is a place of being affected by the world, but also a place where change can happen.
Being a user between individual and community
As I have discussed, the request to choose a community at the beginning of an identification process creates tension between the conventions of the liberal subject (where communities always come after the subject) and the specific affordances of federation as infrastructural organization, which centers the communities around servers.
This tension sparked a long debate in the Mastodon community about the difficulties newcomers experience with the sign-up process. At this point, a list of servers to join was provided on https://joinmastodon.org (the primary information site for joining Mastodon). But due to the rapid expansion of the Mastodon network, the list quickly grew into a cluttered, overwhelming index of servers that offered newcomers little real orientation.
In order to make it easier for people wanting to join, the first move was to solve the problem by meeting the expectations of users (and, by copying them, the conventions of corporate platforms), and giving up the list in favor of promoting only one server: mastodon.social. Mastodon.social is one of the biggest instances (servers), operated by Mastodon GmbH, a nonprofit organization run by Eugen Rochko that is registered in Berlin (Eugen Rochko is the developer of Mastodon, but not the owner[2]).
This drew sweeping critique from the community, which highlighted the dangers of centralization for the whole ecosystem and insisted that federation is precisely about community-centered infrastructure. Eventually, this was resolved by again putting up an overview of servers, but this time with the ability to filter it by region, topic, language, and other types of differences. This solution is a strategy for remaining loyal to federation- and community-based infrastructures by making the wealth of communities legible in order to facilitate choice.
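As a toy illustration of this design choice (the server entries and their fields below are hypothetical, not the actual data published on joinmastodon.org), legibility here comes from keeping the overview plural and letting newcomers narrow it down, rather than from promoting a single default:

```python
# Hypothetical server entries; the real joinmastodon.org data looks different.
servers = [
    {"domain": "queer.example", "language": "en", "topics": ["lgbtq+", "art"]},
    {"domain": "hack.example", "language": "de", "topics": ["floss", "hacking"]},
    {"domain": "clima.example", "language": "es", "topics": ["climate"]},
]

def filter_servers(entries, language=None, topic=None):
    """Narrow the overview by language and/or topic so newcomers can orient themselves."""
    result = entries
    if language:
        result = [s for s in result if s["language"] == language]
    if topic:
        result = [s for s in result if topic in s["topics"]]
    return result

print(filter_servers(servers, language="en", topic="art"))  # the queer.example entry
```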
On the official mobile app (named Mastodon and also maintained by Mastodon GmbH), however, new users are still presented with mastodon.social as the default server. In order to choose another server, users are taken to the list on https://joinmastodon.org, which is a website outside the app. Thus, joining servers other than mastodon.social is discouraged by a complicated process that is difficult for newcomers to navigate. This difference in sign-up procedures on the web and in the app mirrors the tension in how users are conceptualized through technology: as a member of a community around federated servers versus a self-contained liberal individual using a service.
To conclude this analysis, Mastodon suggests a different user subject position than corporate big technology platforms: one oriented towards a community, not an atomic, isolated, self-contained individual. This interpellation comes from the technical principle of the federation of independent servers. The difference in interpellation leads to tensions both on the part of users, as a crisis in subjectivity, and on the part of the platform, in handling its onboarding process. But while opening the user subject position towards communality, Mastodon still upholds the difference between users and those involved with providing the infrastructure: the administrators, the programmers, and the moderators. Thus, the user subject position offered by Mastodon is still that of a consumer, clearly separate from that of the producer and provider of the service, as with big tech platforms.[3]
Trans*Feminist Servers as protagonist
Since subject positions are cultural forms, cultural and artistic practices in particular make for a privileged position for developing methods and practices of doing otherwise.
One example of thinking differently about how subject positions are invoked by means of technology is formulated in A Wishlist for Trans*Feminist Servers. This is an updated version of an older text, The Feminist Server Manifesto (Constant). Both of them were written by a “community of people interested in digital discomfort,” as the Wishlist puts it. Both the Manifesto and the Wishlist[4] choose the server as their protagonist, in the form of a self-articulation. A protagonist is what Goriunova calls a “figure of thought” that offers a “position from which a territory can be mapped and creatively produced” (Goriunova, “Uploading Our Libraries”). By means of this self-articulation, the Trans*Feminist Servers produce different imaginations of technology that include the role of the user.
At the center of this articulation are questions of servitude. “Are you being served?” was the title of a workshop that took place in Brussels in 2014. During a three-day event at Constant, an artist-run space in Brussels (About Constant), artists and practitioners met to discuss concepts and exchange alternative practices involving servers around the questions of who is being served, by whom, and under what conditions (Hofmüller et al.). Introducing the question of servitude allows for a discussion of relationships to and through technology. This involves the subject positions they invoke. Users of platforms are encouraged to believe they are at the receiving end of servitude through a discourse about use-fulness and use-ability, but services are provided under very specific conditions marked by privilege. The chances of being served are not equally distributed, and vulnerable communities often find that they, their content, and their communication are not protected by platforms (to be clear, this includes Mastodon, which is notoriously white and has been proven to be hostile towards people of color in far too many cases). Servitude is a very specific relation between users and technology. It includes the strong distinction between users and the contexts of running services, including the materialities of infrastructures and all of the practices that are needed to make a service work. Servitude is deeply marked by abstraction from specific contexts, with uncomfortable links to slavery as the most radical abstraction, or thingification. This link is still present in the technological terminology of master/slave relationships or, less explicitly, in talk of clients and servers. Trans*Feminist Servers try to open up these relationships towards other, more careful ones while keeping in mind the “swamp of interdependencies they are with” (A Wishlist for Trans*Feminist Servers).
Feminist servers exist as communities and real infrastructures (List of Feminist Servers) out of a real need to create safer spaces online for vulnerable communities (spideralex). Thus, Trans*Feminist Servers are both a thinking tool and communal infrastructures (Snelting and spideralex), which means that their work is both narrative work and lived technological practice. This is radical in the sense that it re-articulates the whole territory—both conceptually, through the protagonist of the server, and practically, in that it operates technology as a community project.
I have argued this to be an active refusal of the master voice of the infrastructure of functionality and abstraction. This refusal opens up technological practice into a space to be inhabited (Niederberger). And as both texts insist, Trans*Feminist Servers exist only because they are cared for by a community, as the need for them is expressed in the acts of creating them. Instead of abstraction, the territory offered by Trans*Feminist Servers is therefore structured by affection. This foregrounds practices of care: administration, maintenance, moderation (meaning the entire scope of making a community work), documentation, fundraising, and, last but not least, using the services, which comes with the responsibility of monitoring and providing feedback on functionality. The wiki of Anarchaserver (one of the many Trans*Feminist Servers) refers to the roles included in Trans*Feminist Server practice as “guardians, fire extinguisher, interfaces and scribes” (anarchaserver). It is interesting to note how these roles point towards specific needs, dependencies, and meaningful relations—that is, embodied contexts.
Hence, being part of a Trans*Feminist Server means participating in an ongoing negotiation of the conditions for serving and service. Here, use is not an act of consumption, but one of creation and re-creation that includes the whole territory of relationships with a community and—importantly—with infrastructure itself.
Conclusion
In the aftermath of Elon Musk’s acquisition of Twitter, many users considered Mastodon an alternative. While it is a microblogging service like Twitter, it is not corporate-owned but a network of connected servers, often operated by communities and nonprofit organizations. However, the change from Twitter to Mastodon proved difficult for many users. I analyze this as a crisis in the user subject position provoked by what I call 'the return of the server'. As tangible infrastructures, servers have been abstracted away from the user perspective due to a further step in the abstraction of digital technology, the cloud, where users deal with seamless fluid processes, dynamic availability, and decontextualized services. Bringing back servers as a central element in signing up to a service asks users to identify themselves not as autonomous individuals, but with respect to a community. This is very different from the consumer choices of big data platforms. To be a user is therefore not self-evident, but deeply shaped by the infrastructural organization of technology, a process Althusser called interpellation. This process also constitutes the subject position of the user as a shared imagination, against which individual subjectivity can be developed. Subjectivity can therefore be seen as a link between the personal and the structural, the individual and the shared, and thus it is a place of being affected but still a place for agency.
I discussed Trans*Feminist Servers as an example of opening the territory for a relation not only to a community, but also to technology and infrastructure itself. Trans*Feminist Servers are both narratives and situated technological practice, and through this they are able to re-articulate a territory of technological relations as a whole. They do so by using the server as a protagonist that offers both a discussion and a terrain for practice, being at once narrative work and lived technological practice. As part of their narrative work, they raise questions of servitude: what does it mean to be served and to serve? Thus, Trans*Feminist Servers formulate different relations, informed by care and maintenance and not by abstraction. This also raises new possibilities for user subject positions: to be a user of a Trans*Feminist Server means being part of an ongoing negotiation about the conditions of services and serving, as a part of a community, but also as a part of technological practice on the level of infrastructure itself.
Both Mastodon and Trans*Feminist Servers challenge the conventional consumer subject position of the user as a self-regulating, autonomous liberal individual. Mastodon does this by making identification in relation to a community the initial step of the sign-up process. Becoming a user on Mastodon therefore means becoming a member of a community first. Trans*Feminist Servers are community-run infrastructures and thus require being associated with a community as well. However, in a second step they also offer different relations to infrastructure itself, in that they radically question relations of servitude and replace them with relations of care and maintenance. This opens up an ecology of practices, transforming use into a contribution far beyond consumption. Being a user on a Trans*Feminist Server thus means being part of the re-creation and maintenance of both the community and the infrastructure.
Notes
- ↑ “Federation is a concept derived from political theory in which the various actors that constitute a network decide to cooperate collectively. Power and responsibility are distributed as they do so. In the context of social media, federated networks exist as different communities on different servers that can interoperate with each other, rather than existing as a single software or single platform.” (Mansoux and Abbing 125)
- ↑ Mastodon is only one piece in a larger set of applications that exchange posts and contents through a shared protocol (ActivityPub), which includes not only the microblogging service of Mastodon (and its forks), but also, among others, PeerTube, a video sharing platform, and Pixelfed, an image-based platform not unlike Instagram. This larger ecosystem of interconnected services is called the “Fediverse.” In the Fediverse you can have content coming from different sources mixed into one feed, which is very different from the gated environments of big tech platforms.
- ↑ Of course, another important difference between Mastodon and big tech platforms is the role data plays in them, and this difference adds more complexity to the question of user subject position. Yet this discussion is beyond the scope of this text.
- ↑ For reasons of readability, I will use “Trans*Feminist Servers” to refer to issues addressed in both texts.
Works cited
About Constant. https://constantvzw.org/site/-About-Constant-7-.html?lang=en. Accessed 19 June 2023.
A Wishlist for Trans*Feminist Servers. 2022, https://etherpad.mur.at/p/tfs.
Althusser, Louis, et al. On The Reproduction Of Capitalism: Ideology And Ideological State Apparatuses. Verso, 2014.
anarchaserver. Be a Guardian, a Fire Extinguisher, a Scriba, an Interface—Anarchaserver. 2022, https://alexandria.anarchaserver.org/index.php/Be_a_guardian,_a_fire_extinguisher,_a_scriba,_an_interface. Accessed 19 June 2023.
Banet-Weiser, Sarah. Authentic TM: The Politics of Ambivalence in a Brand Culture. NYU Press, 2012.
Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity Press, 2019.
Boyd, Danah, and Kate Crawford. “CRITICAL QUESTIONS FOR BIG DATA: Provocations for a Cultural, Technological, and Scholarly Phenomenon.” Information, Communication & Society, vol. 15, no. 5, 2012, pp. 662–79, https://doi.org/10.1080/1369118X.2012.678878.
Bucher, Taina. “The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms.” Information, Communication & Society, vol. 20, no. 1, Jan 2017, pp. 30–44, https://doi.org/10.1080/1369118X.2016.1154086.
Butler, Judith. “Performative Acts and Gender Constitution: An Essay in Phenomenology and Feminist Theory.” Performing Feminisms: Feminist Critical Theory and Theatre, edited by Sue-Ellen Case, Johns Hopkins University Press, 1990, pp. 222–39.
Constant. [Version 0.1] A Feminist Server. 2014, https://transhackfeminist.noblogs.org/post/2014/06/03/version-0-1-a-feminist-server-constantvzw/. Accessed 19 June 2023.
Couldry, Nick, and Ulises Mejias. The Cost of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press, 2019.
Devendorf, Laura, and Elizabeth Goodman. “The Algorithm Multiple, the Algorithm Material: Reconstructing Creative Practice.” The Contours of Algorithmic Life, UC Davis, 2015, https://www.slideshare.net/egoodman/the-algorithm-multiple-the-algorithm-material-reconstructing-creative-practice.
Eubanks, Virginia. Automating Inequality. How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press, 2018.
Gill, Rosalind, and Akane Kanai. “Mediating Neoliberal Capitalism: Affect, Subjectivity and Inequality.” Journal of Communication, vol. 68, no. 2, Apr 2018, pp. 318–26, https://doi.org/10.1093/joc/jqy002.
Goriunova, Olga. “The Digital Subject: People as Data as Persons.” Theory, Culture and Society, vol. 36, no. 6, 2019, pp. 125–45, https://doi.org/10.1177/0263276419840409.
---. “Uploading Our Libraries: The Subjects of Art and Knowledge Commons.” Aesthetics of the Commons, edited by Felix Stalder et al., Diaphanes, 2021, pp. 41–61.
Hall, Stuart. “The Toad in the Garden: Thatcherism among the Theorists.” Marxism and the Interpretation of Culture, edited by Cary Nelson and Lawrence Grossberg, Macmillan Education, 1988, pp. 35–57.
Hofmüller, Reni, et al. Are You Being Served? (Notebooks). Constant, 2014, https://areyoubeingserved.constantvzw.org/AreYouBeingServed.pdf.
Iliadis, Andrew, and Federica Russo. “Critical Data Studies: An Introduction.” Big Data & Society, Oct 2016, pp. 1–7, https://doi.org/10.1177/2053951716674238.
Iqbal, Mansoor. “Twitter Revenue and Usage Statistics (2023).” Business of Apps, 2 May 2023, https://www.businessofapps.com/data/twitter-statistics/.
Karizat, Nadia, et al. “Algorithmic Folk Theories and Identity: How TikTok Users Co-Produce Knowledge of Identity and Engage in Algorithmic Resistance.” Proceedings of the ACM on Human-Computer Interaction, vol. 5, no. CSCW2, 2021, p. 1.
Lialina, Olia. Turing Complete User: Resisting Alienation in Human-Computer-Interaction. Heidelberg University Publishing, 2021.
List of Feminist Servers. History of Anarchaserver and Feminists Servers visit this section. https://alexandria.anarchaserver.org/index.php/History_of_Anarchaserver_and_Feminists_Servers_visit_this_section#List_of_Feminist_servers. Accessed 19 June 2023.
Mansoux, Aymeric, and Roel Roscam Abbing. “Seven Theses on the Fediverse and the Becoming of FLOSS.” The Eternal Network: The Ends and Becomings of Network Culture, edited by Kristoffer Gansing and Inga Luchs, Institute of Network Cultures / transmediale e.V., 2020, pp. 125–40.
McNeil, Joanne. Lurking. How a Person Became a User. MCD, 2020.
Monroe, Dwayne. “Seeding the Cloud.” Logic, no. 16, 27 Mar 2022, pp. 91–102.
Niederberger, Shusha. “Feminist Server—Visibility and Functionality: Digital Infrastructure as a Common Project.” Springerin | Hefte Für Gegenwartskunst, no. 4, Aug 2019, pp. 8–9.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016.
Rader, Emilee, and Rebecca Gray. “Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed.” Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Apr 2015, pp. 173–82, https://doi.org/10.1145/2702123.2702174.
Rouvroy, Antoinette. “The End(s) of Critique: Data-Behaviourism vs. Due-Process.” Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology, edited by Katja De Vries and Mireille Hildebrandt, Routledge, 2013, pp. 143–65.
Seaver, Nick. “Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems.” Big Data & Society, vol. 4, no. 2, Nov 2017, https://doi.org/10.1177/2053951717738104.
Siles, Ignacio, et al. “The Mutual Domestication of Users and Algorithmic Recommendations on Netflix.” Communication, Culture & Critique, vol. 12, no. 4, Dec 2019, pp. 499–518, https://doi.org/10.1093/ccc/tcz025.
Silverman, David, and Paul Atkinson. “Kundera’s Immortality: The Interview Society and the Invention of the Self.” Qualitative Inquiry, vol. 3, no. 3, Sep 1997, pp. 304–25.
Snelting, Femke, and spideralex. Forms of Ongoingness. Interview by Cornelia Sollfrank, Video, transcript, 16 Sept. 2018, http://creatingcommons.zhdk.ch/forms-of-ongoingness/.
spideralex. “CREATING NEW WORLDS with Cyberfeminist Ideas and Practices.” Beautiful Warriors: Technofeminist Praxis in the Twenty-First Century, edited by Cornelia Sollfrank, Minor Compositions, 2020, pp. 35–56.
Srnicek, Nick. Platform Capitalism. Polity Press, 2016.
nate wessalowski
& Mara Karagianni
From Feminist Servers to Feminist Federation
From Feminist Servers to Feminist Federation
Abstract
Situated within the technofeminist care practices of feminist servers, this text explores the possibilities of feminist federation. Speaking from our collective practice of system administration, we start by introducing Systerserver, laying out the feminist pedagogies that inform our practice of learning and doing together with technologies and the politics of maintenance and care. We then revisit the identity politics of feminist servers as more than safe/r spaces in the cis-male-dominated domain of free/libre and open source software communities. Finally, we reflect on our experiences of building and federating a feminist video platform with the PeerTube software on Systerserver. Facing the techno-social challenges around the protocol of federation and adapting the software alongside our federating practice, we focus on sustainable and care-oriented alternatives to ‘scaling up’ the affective infrastructures of our feminist servers.
We never know how our small activities will affect others through the invisible fabric of our connectedness. In this exquisitely connected world, it's never a question of 'critical mass'. It's always about critical connections. – Grace Lee Boggs
Introduction
In this text we adopt practices of weaving feminist networks of solidarity and care[1] in the age of hybrid on- and offline world-making (Haraway 35f). More specifically, we investigate the possibilities of growing into a feminist federation, which accompany the continuation of a feminist video platform project based on the PeerTube software (tube.systerserver.net). The idea of installing, maintaining and adapting PeerTube in order to build a feminist video platform emerged from the closely knit collaboration of three feminist servers: Anarchaserver (anarchaserver.org), Systerserver (systerserver.net) and Leverburns (terminal.leverburns.blue). Each of these servers maintains free and open source software that supports different ways of technopolitical organizing, from media cloud hosting and tools for the creation of polls, to web hosting for archived cyber-/technofeminist websites. While some of the sysadmins involved in the installation of PeerTube are or have been involved with two or even all three feminist servers, Anarchaserver and Leverburns mainly supported the project with their tools, while the PeerTube platform was realized through and on Systerserver. For this reason, we focus on the practices around Systerserver and the group of system administrators (sysadmins) actively involved in the PeerTube project. The authors and contributors to this text are women, trans and non-binary people currently part of Systerserver and based in different locations across Europe. Systerserver organizes mainly through self-hosted mailing lists,[2] video calls and other tools that enable shared working sessions, as well as occasional meetings in person during feminist hacking or other project-related events.
The video platform was set up with the support of a Belgian art fund received in 2021, not as a permanent infrastructure but as an experimental process for sharing artistic videos and live streaming. A year later, when the funded period came to an end, two things became clear: although there was a need among video-makers[3] to host their art and content in feminist and community-based environments, we didn’t want to become yet another centralized service infrastructure. Instead, awarded another grant by a Dutch design fund, we set out to enable other collectives to host their own infrastructures and become part of an emerging feminist federation of video platforms.
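Federation here happens at the level of whole instances. As a schematic sketch only (this is not PeerTube’s exact payload or admin tooling, and the instance addresses are hypothetical), one video platform federates with another by having its instance actor send an ActivityPub “Follow” to the other; the followed instance’s public videos then appear in the follower’s federated index.

```python
# Schematic only: not PeerTube's actual API. Instance addresses are illustrative;
# the point is that federation is a deliberate, named connection between instances.
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://tube-a.example/instance-actor",
    "object": "https://tube-b.example/instance-actor",
}

# Deciding whom to follow (and whom not to) is where the technopolitics sits:
# a feminist federation grows through trusted connections, not indiscriminate scaling.
print(follow_activity["type"], follow_activity["object"])
```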
The process of writing about the possibilities of feminist federation started with Systerserver’s participation in the Minor Tech workshop,[4] where questions around scalability were discussed and researched. ‘Scalability’ is more than just a descriptive category: it has also been infused with the ethical obligation to facilitate participation (Sterne VII), namely to involve as many people as possible, if not to ‘change the world’. In this sense, small-scale projects are measured by their potential to eventually ‘grow up’ and ‘become major’. Projects or collectives such as feminist servers, which are understood to be ‘niche’ or ‘small scale’, typically involve a limited number of people, known only within certain counterpublics (Travers) or circles of friends. They are geared neither towards profit nor efficiency, and often work with a (trans)local embeddedness, where geographies and cultures come together in virtual and physical spaces, and therefore they cannot be easily replicated. Starting from our practice of system administration and the embodied experiences of collectively building a feminist video platform, we turn to explore the process ‘from feminist servers to feminist federation’. Based on a technofeminist understanding of the political and gendered aspects of technology, we ask how technologies and protocols of decentralized social media networking and federation[5] can facilitate this process. What are the challenges of forming and growing into a feminist federation?
Feminist Servers
Feminist servers are infrastructures for nourishing communities of feminists with an interest in technologies or a digitally mediated, art and/or activist, praxis. They are an embedded techno-social practice, a critical intervention into human-machine dichotomies, and protagonists of a speculative fiction calling for a feminist internet (spideralex, “Internet Féministe”; Toupin/spideralex). Due to their ‘techno-nature’ they are highly connective, interlinking and forming temporary networks of care and solidarity to exchange knowledge and tools, learn together and become involved with each other’s infrastructure projects.[6] The genealogies of feminist servers are not easy to trace as they form ties and intersections with various movements such as cyber-, techno- and transhack feminisms, women-in-tech initiatives, academic fields around networks, media and publishing, autonomous tech collectives and network activism, digital commons enthusiasts, the hacker, self-hosting, free/libre and open source software (FLOSS) movements, Do-it-yourself/together (DIY/T) culture, and feminist cybersecurity and self-defense. The motivations behind the formation of feminist servers often stem from the need for spaces in which lesbians, women, non-binary and trans persons, disidentes de género (gender dissidents), and queers can share knowledge about technology and organize themselves.[7]
Systerserver is one of the earliest known feminist server collectives. The server was launched in 2005 as an initiative of the GenderChanger Academy (Mauro-Flude/Akama 51), founded by and composed of a group of women involved in a squatted Internet Cafe/Hackerspace in Amsterdam (ASCII) during the late 90s (Derieg). The GenderChanger Academy was formed in the early 2000s to “get more women involved in technology” (Genderchangers) by initiating tech skill-sharing workshops.[8] In 2002 the first Eclectic Tech Carnival (/etc) took place – a new format derived from the Amsterdam-affiliated network that enabled skill-sharing sessions, workshops and discussions in the shape of self-organized hack meetings across Europe, from Croatia to Greece to Serbia, Austria, Romania and Italy.[9] During these mostly annual meetings, Systerserver – while often dormant throughout the rest of the year – was activated as a supportive infrastructure for hosting websites, organizing, learning and archiving. When the frequency of the /etc meetings slowed down – partly due to a crisis in identity politics and the remediation of trans-hostility and the inclusion of trans persons – new strategies to keep the server active were sought out. By that time, many people had been involved with Systerserver and most of those who had launched the server were no longer actively participating. In 2021 the current group of sysadmins applied for funds to develop a feminist video platform, in order to sustain the feminist server project and the community around it.
Even though in the context of feminist servers a ‘server’ is not a purely technical term, virtual and physical machines are integral to the techno-social practices which constitute feminist servers. The technical infrastructures of Systerserver, Anarchaserver and Leverburns are located either on virtual servers within shared activist networks, in someone’s home or, in the case of Systerserver at mur.at, within a net culture initiative that has a data room. Some of the servers are stable enough to distribute their services, and this allows the servers to depend on each other, sharing their tools while fostering webs of commitment, responsibility and care.
In resonance with other writings on the subject of feminist servers (spideralex, “Internet Féministe”; Niederberger, “Feminist Server”, “Der Server ist das Lagerfeuer”; Mauro-Flude/Akama, “A Feminist Server Stack”; Kleesattel), the following passages trace important aspects of the feminist pedagogies that inform the practices of maintaining a server and building a feminist video platform through Systerserver.
Making (safe/r) spaces for feminist and queer communities
The idea of a feminist server is sometimes linked to the concept of safe/r spaces,[10] which actively oppose patterns of discrimination, taking intersectional safety needs and trust into account. Feminist servers can become safe/r spaces for queer, trans and women-identified persons who experience patriarchal oppressions and violence, especially in the cis male-dominated realm of information technology and digital infrastructures. Most of the time, feminist servers stay intimate, known to small circles of friends and allies with no explicit or formalized politics of invitation. However, with the PeerTube platform Systerserver opened their affective infrastructure to seek out critical connections with other feminists and collectives with a shared interest in self-managed digital infrastructures, away from the exposure to harassment, exploitation and censorship inherent to mainstream platforms.[11] During these residencies, we entered into an exchange about the technopolitical desires, vulnerabilities and accessibility needs of different modes of inhabiting our feminist video platform. Together with Broken House (broken_house account), a community tool for sex-positive artists and porn makers in Berlin, we realized an unlisted and invite-only 24-hour streaming event that showcased a collage of post-porn art, archival material and video clips. The artists felt comfortable hosting a sensitive event on a feminist server, because knowing the people behind the machine, and knowing that the stream would remain unlisted, established a shared trust. Another residency with the design research collective for disability justice MELT (meltionary.com) resulted in an illustrated video about a project called ACCESS SERVER, which included sign language and was published as multiple versions of one video, each with a different set of subtitles.
Feminist critique of FLOSS: Choosing our dependencies[12]
The PeerTube software that we installed on Systerserver is free software for the creation of video and streaming platforms, maintained and developed by the French non-profit initiative Framasoft. PeerTube forms part of FLOSS, an umbrella term for free and open source software such as the Linux kernel, the Firefox web browser, NextCloud or Signal Messenger. Freedoms are granted through licenses such as the GPL (General Public License) or, in the case of PeerTube, the Affero GPL.[13] These licenses circumvent existing proprietary copyright regimes and allow everyone with the necessary skills to run, study, improve and distribute the software. Feminist servers – whenever we can – run and adapt free and open source software with regard to our specific and embodied needs. Free software aligns politically with feminist servers’ core values, such as sharing knowledge, empowering each other and working against power hierarchies based on gatekeeping and on access to resources, tools and knowledge, as it allows them to run the software for themselves and on their own machines (see also Snelting/spideralex 4, with reference to Laurence Rassel; Niederberger, “Der Server ist das Lagerfeuer” 7f). This is a form of emancipation from centralized or autonomous tech infrastructures, which are often administered by cis men, and it thus challenges the historical attribution of femininity as something in opposition to technology, as well as the power awarded through technological proficiency (Travers 225, citing Cockburn). Free software therefore allows for bypassing the power monopolies held by tech corporations under the matrix of patriarchal techno-domination. Despite continuous efforts to address the diversity of identities in FLOSS development,[14] only around 10 percent of contributions in FLOSS stem from women (Bosu/Sultana). These injustices are rooted in interrelated causes that form access barriers, such as sexist bias (Terrell/Kofink/Middleton/Rainear/Murphy-Hill/Parnin/Stallings) and toxic behavior paired with the refusal to acknowledge forms of discrimination (‘gender blindness’) given the supposedly open nature of FLOSS projects (Nafus). Feminists have also pointed to factors such as the unequal distribution of care work and unequal wages, resulting in an imbalance in the free time available for contributing volunteer work. Many digital infrastructure projects, even though in theory open for anyone to participate in, are therefore prone to reinforcing mechanisms of exclusion and power hierarchies along intersectional patterns of marginalization (Dunbar-Hester 3f).
Maintenance as Care
Computer science and IT industry culture have tried to distinguish software development, as creative work, from the tedious labor of software maintenance (Hilfling Ritasdatter 156f).[15] This distinction also applies to sysadmin work, which is mostly about maintaining, repairing and updating infrastructure and thus shares many characteristics with invisibilized, racialized and feminized care work (Tronto 112-114). The problems of devaluation are rooted in the intricacies of the server-client relationship, as well as in the ‘software as service’ or cloud paradigm. The questions “Who is serving whom? Who is serving what? What is serving whom?” therefore lie at the center of the critical practice around feminist servers, which “radically question the conditions for serving and service; they experiment with changing client-server, user-device and guest-host-ghost relations where they can” (Transfeminist Wishlist).
Practices of care and maintenance within feminist servers must be understood as negotiations of collective responsibility. One important agreement for Systerserver is the no-pressure policy, which allows its sysadmins to participate according to their availability and thereby extends the principle of care towards themselves by taking into account the different intersectional precarities that define their situations. Contributions to the maintenance of the machine, and to the social relations around it, entail security upgrades, hardware replacements, backups, data migrations and attentive documentation. In the case of the Systerserver video platform, this includes adapting the software to the needs of its community and to specific use cases, curating new accounts, updating the platform’s code of conduct and communicating changes to the inhabitants of the platform. Nonetheless, the ethos of feminist server work does not comply with the superimposed specters of seamlessness, infinite resources and the nonstop availability of computing.[16]
Affective Infrastructures
Feminist servers are often described in terms of digital, material and discursive or speculative infrastructures, which ties together many of the above-mentioned aspects around making space, issues of safety, trust, access and questions of being served, as well as maintenance and care (Niederberger, “Feminist Server”). Cultural theorist Lauren Berlant writes that “the question of politics becomes identical with the reinvention of infrastructures for managing the unevenness, ambivalence, violence, and ordinary contingency of contemporary existence” (Berlant 394). To her, building and maintaining infrastructures is a way of doing (techno)politics, as infrastructures shape and organize the social relations that form around them. While it critiques the dismissal of the material nature of ‘cyberspace’, an infrastructural approach can sometimes tilt into prioritizing the technical over the social aspects. This is why some of us understand feminist servers in terms of affective infrastructure, foregrounding acts of community-based maintenance and affective labor. The everyday and mundane repair necessary when things break down can – in small and multiple increments – lead to larger changes in knowledge production (Hilfling Ritasdatter 168, with reference to Graham and Thrift).
Affective infrastructures suggest a different relation to tools and data, an “added layer of intimacy” (Motskobili 9) based on the collective practice of hosting and adapting software to meet our needs and desires. In reference to the histories of queer resistance and the re-appropriation of the ‘pink triangle’ by the queer community (Jensen), Systerserver’s video platform adapted the pink triangle into a deconstructed PeerTube logo: one of its tactics for designing a queer-friendly interface. This also changes the practices of engaging with the infrastructures as a “space that we want to inhabit, as inhabitants, where we make a contribution, nurturing a safe space and a place for creativity and experimentation, a place for hacking heteronormativity and patriarchy” (Snelting/spideralex 5).
Feminist Federation
After the first phase of the PeerTube platform had been implemented on Systerserver and a curated period of try-outs had come to an end, questions arose regarding the continuation and maintenance of the video platform as well as its long-term availability. While the response from the resident artists and collectives was very encouraging, growing Systerserver’s video platform into a more visible instance[17] did not align with the sysadmins’ capacities, resources and interests. Thus, instead of taking on more responsibility as a ‘single point of service’ and adopting the naturalized logic of ‘scaling up’, Systerserver decided to explore a different path to nurturing feminist communities: the formation of a feminist federation. This is an ongoing process that, at the time of writing, has just started to unfold. This text can thus only provide a preliminary outline of what a feminist federation on the basis of the PeerTube software might eventually grow into.
PeerTube is based on the open communication protocol ActivityPub (“What is ActivityPub”), which allows a video platform to connect not just with other PeerTube platforms, but with all social networks and other media instances based on the same protocol. The technosocial agreement behind this is called federation, which is characteristic of the fediverse:[18] a decentralized network of currently around 50 different types of social media such as Mastodon (microblogging), Mobilizon (event management), Funkwhale (sound/audio hosting) or Pixelfed (image hosting).[19] Through federation, content such as microblogging posts or files (images, documents, videos) hosted on one instance can be accessed from another. All instances within the fediverse are maintained by collectives or individual sysadmins, who can open their infrastructures to a community of participants according to their politics of invitation (e.g. open access or invite-only) and who can adopt or fork[20] the software, propose a code of conduct or make design choices for their instance.
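To make the mechanics of federation a little more concrete, the following minimal sketch shows roughly what an ActivityPub ‘Follow’ activity looks like when one instance asks to follow another. The instance URLs are invented placeholders, and a real exchange involves further fields and, in practice, signed HTTP requests; the sketch illustrates the protocol’s shape rather than Systerserver’s actual setup.

```python
import json

# Minimal sketch of the ActivityPub "Follow" activity that one instance sends
# to another when it wants to federate. The URLs below are hypothetical
# placeholders, not real servers.
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://videos.example-feminist-server.net/activities/follow/1",
    "type": "Follow",
    "actor": "https://videos.example-feminist-server.net/accounts/peertube",
    "object": "https://tube.example-allied-collective.org/accounts/peertube",
}

# In real federation, the sending server delivers this JSON document to the
# recipient's inbox endpoint via an HTTP POST, which is in practice signed
# (HTTP Signatures) so the receiving instance can verify who is asking.
print(json.dumps(follow_activity, indent=2))
```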
The concept of federation originally derives from a political theory of networks in which power, resources and responsibilities are shared between actors, thus circumventing the centralization of authority (Mansoux and Roscam Abbing). Robert Gehl and Diana Zulli have argued that, when this is implemented within alternative social networks, it can maintain the local autonomy of all instances while at the same time strengthening the collective commitment to an ethical code fostering connection and exchange. They have linked the politics behind federated social media to the concept of the covenant, a federalist political theory developed by Daniel Elazar (Gehl and Zulli 3). A covenant is an agreement to (self-)governance by a group of people, and it is based on shared ethical choices.[22] Participants’ consent is actively and continuously negotiated, which in the case of the fediverse means that instances can freely choose to join or leave the fediverse by federating, or not, with other instances (Gehl and Zulli 4). This capacity for consensual engagement and autonomous boundary-setting aligns with feminist servers’ technofeminist desire for autonomous infrastructures and for choosing our own dependencies. Not only does the PeerTube software, as part of FLOSS, allow us to create a safe/r space on our machines, but the application of an open protocol such as ActivityPub also establishes a technosocial base that enables growing bonds among different feminist communities. Here connection becomes a consensual choice, not a forced commitment or a default that is hard to reverse. Even after federating with each other, connections can be dissolved (‘defederated’) at any time – for example in the case of irreconcilable safety needs or in the face of diverging values – leaving instances with the ability to self-determine and negotiate their boundaries according to their needs. Their ability to consent is tied to the formation of non-hierarchical bonds that presuppose the absence of undesired dependencies or power relations.
PeerTube has an opt-in federation style, meaning that after a new installation of the PeerTube software, the instance is neither followed by nor following other instances and therefore only hosts its own inhabitants and content. In order to federate, the administrators of the instance accept so-called ‘follow requests’ and follow other instances with which they would like to share content.[23] After the initial setup of PeerTube, Systerserver’s community started to look for instances with which to federate and share content, but realized that there were hardly any queer or feminist platforms around. Considering that PeerTube and even the fediverse are not widely known, that they remain close to cis male-dominated FLOSS communities, and that installation and maintenance come with demanding prerequisites, this is not very surprising. It does, however, have consequences for the feminist appropriation of the principles and technosocial protocols of federation. For Systerserver to federate its platform, it is necessary to take on an empowering and pedagogical approach: rather than retrospectively ‘connecting’ something that already exists, this means growing relational networks of solidarity and care that support the making of video infrastructures embedded in other localities.
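As a rough illustration of this opt-in workflow, the sketch below imagines how an administrator might script the act of following another instance through PeerTube’s REST API. The endpoint path, the token handling and the instance hostnames are assumptions to be checked against the current PeerTube documentation rather than a verified recipe; in everyday practice the same step is usually done through the admin web interface.

```python
import requests

# Assumed values: replace with a real admin access token and instance hostnames.
OUR_INSTANCE = "https://tube.example-feminist-server.net"
ADMIN_TOKEN = "REPLACE_WITH_ADMIN_OAUTH_TOKEN"

def follow_instance(host_to_follow: str) -> None:
    """Ask our (hypothetical) PeerTube instance to send a follow request.

    The endpoint path "/api/v1/server/following" is an assumption based on
    PeerTube's admin federation features and should be verified against the
    current REST API documentation.
    """
    response = requests.post(
        f"{OUR_INSTANCE}/api/v1/server/following",
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
        json={"hosts": [host_to_follow]},
    )
    response.raise_for_status()
    print(f"Follow request sent to {host_to_follow}")

if __name__ == "__main__":
    # A hypothetical allied instance we would like to federate with.
    follow_instance("tube.example-allied-collective.org")
```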
Looking into this kind of resonance with other communities, Systerserver started to facilitate and participate in setting up two new video platforms:[24] one at Ca la Dona, a feminist community center in Barcelona, and one with Broken House, the Berlin-based community tool with which Systerserver had already collaborated in the form of a residency when first setting up the PeerTube platform. The installation and federating processes are part of two week-long programs, each carried out together with the local communities.[25] Once the platforms are up and federated, each platform aggregates the content of the others through its own web interface. However, this is only one of the ways in which critical connections between feminist and queer communities can manifest themselves within a feminist federation. Another important aspect is the facilitation of networks of solidarity and care among the participants. These kinds of networks can grow by meeting each other and forming relationships that facilitate the exchange of knowledges, support, advice and resources. This can result in the formation of a covenant of platforms that agree to federate with each other along certain core values or upon a shared code of conduct.
Supporting local communities in the endeavor of building up their own technopolitical infrastructures comes with the challenges of meeting other spatial and cultural realities, as well as learning about the different needs tied to the context and motivations behind building a video platform. In the case of Ca la Dona, the local community and space was able to reactivate old hardware (rack servers) donated to the space and install their PeerTube instance on an in-house server.[26] However, issues arose with regard to the excessive energy consumption of the old hardware and the lack of a stable network interface to the outside. In the case of Broken House, which is the upcoming collaboration, the challenges that lie ahead range from choosing a hosting provider for renting a server to ensuring that the local community can establish connections with people who are motivated to learn and to help administer the server.
While adapting the PeerTube software to our community needs, we faced two shortfalls: one was the lack of group accounts, and the other the unchecked power of administrators and moderators over the inhabitants’ data and invitations to federate. Group accounts are valuable to communities, especially the most vulnerable ones such as feminist, queer and trans communities, as they enhance anonymity within a group and reduce toxic attacks directed at single persons. ActivityPub has yet to implement accounts for a group of people.[27] Christine Lemmer-Webber, lead author of the ActivityPub protocol, notes “that the team predominantly identified as queer, which led to features that help users and administrators protect against ‘undesired interaction’.”[28] However, ActivityPub and PeerTube are still centered around individual creators and do not yet support group accounts or community video channels, even though the community has been asking for this since 2018.[29]
In his book Platform Socialism, James Muldoon suggests that we should shift our concerns away from “privacy, data and size” and instead claim “power, ownership and control” over our digital media (Muldoon 2). Whereas in the case of federated social networks there is an empowering dimension at play, as activists start to collectively govern part of the infrastructure, there is an asymmetry of power between inhabitants and administrators/moderators when it comes to owning our data. The fediverse allows for a social design of privacy by putting effort into providing finer moderation tools (Mansoux and Roscam Abbing 132-33), such as visibility preferences for posts and defederation by blocking other instances. However, by default sysadmins and moderators have access to unencrypted user messages and databases as well as graphs of interactions (Budington). This is why Sarah Jamie Lewis has called for a distribution of powers, such as a privacy-preserving persistence layer removed from any specific application:
You need that first persistence layer to be communal and privacy preserving to prevent any entity being in a position [to] do something like all the DMs on this instance are readable by whoever admins it. – Sarah Jamie Lewis
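To illustrate what such a privacy-preserving layer would change in practice, here is a minimal sketch, assuming the PyNaCl library and two hypothetical inhabitants, of a direct message encrypted end-to-end before it ever reaches the server: the instance would only store ciphertext, so not even its admins could read it. This illustrates the principle Lewis points to; it is not how current fediverse software actually handles messages.

```python
from nacl.public import Box, PrivateKey

# Each inhabitant holds their own keypair; only public keys are shared.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts for the recipient before anything touches the server.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"see you at the next /etc meeting?")

# This is all a (hypothetical) instance database would need to persist:
stored_on_server = bytes(ciphertext)

# Only the recipient's private key can recover the plaintext; the sysadmin,
# holding neither key, sees only ciphertext.
receiving_box = Box(recipient_key, sender_key.public_key)
print(receiving_box.decrypt(stored_on_server))
```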
Encrypted social networks (hybrids of federation and peer-to-peer) have recently emerged and are still in the making.[30] However, technical contributions to federated social networks remain dominated by a specific group of developers and are still lacking in gender and ethnic diversity.[31] This may account for why the design of the more widespread federated social networks falls short with regard to privacy and group accounts, whose importance for community safety has not been addressed yet.
From where we stand now, and according to the resources available to us, we choose to focus on the social and technopolitical aspects of caring for our infrastructures and growing into a feminist federation, rather than on the development of the software itself. This means that we make do with the existing open protocol of ActivityPub and the PeerTube software, which we can adapt in accordance with our basic needs for free software, autonomous safe/r spaces and the possibility of sustainably growing our affective infrastructures. Nevertheless, we also engage in a closer investigation of the development of, and debates around, PeerTube and ActivityPub and their open source communities, for example by writing this text.
Outro: How not to scale but resonate
The anthropologist Anna Lowenhaupt Tsing has criticized the prevalent conceptualization of ‘scalability’ by pointing out how projects of scale are often implicated in extractivist, colonialist and exploitative modes of production. She defines scalability as the characteristic of something that can expand without transforming, and which is therefore prone to rendering the surrounding landscape and nature (including humans) into mere resources (Tsing, “Nonscalability” 507). The idea of scalability is thus incompatible with, and even in conflict with, the situated, power-sensitive and non-exploitative approaches that characterize feminist servers. And while the values of feminist servers lie precisely in their nonscalable qualities, accounting for the embodied needs of people, landscapes and machines, this does not make them isolated ‘niche’ phenomena. Instead, feminist servers have from their beginnings set out to explore nonscalable ways of forming networks of solidarity and care among themselves and beyond. Among those, this text has explored the beginnings of a feminist federation as one possible mode of reaching out and growing – not in the distorted sense of infinite progress, but in sustainable and careful ways. In the face of both structural and particular precarities, this implies getting to know and strengthening each other’s communities in the process of federating, and creating fruitful ways of exchange and mutual support. The roles that Systerserver takes in facilitating local communities before, during and after the installation of PeerTube are part of a collective learning process, which informs our feminist pedagogies.
This shared effort may at some point result in a covenant with a more explicitly shared set of values articulated from within the feminist federation and in collaboration with all the communities that participate in it. It will reflect a process of learning to maintain feminist infrastructures according to the local needs and context from which each community comes together. This is what we may call the resonance of queer and feminist voices, facilitating and hearing each other out in order to find common ground in recognizing the differences. We do this by engaging in political debates and by establishing critical connections with allies, continuing our efforts of caring for our feminist digital infrastructures now and in the long run. Systerserver’s ongoing experimentation with the possibilities of a feminist federation can be understood as the interplay between a social and artistic embodiment of a technological protocol that allows content to be streamed, accessed and exchanged between servers. But while the idea behind most social networking protocols is to establish as many connections as possible, feminist federation embraces a more hesitant and critical mode of connecting, and is only interested in federating with others who share our approach of queering technopolitics.
As a collaborative effort to think and speak about some of the intricacies of caring for machines and bodies in the context of feminist servers, this text can only be an articulative exercise. It will accompany but never capture or represent what it is that some of us are doing or how some of us find meaning in what it is we are doing. Instead it becomes part of our collective processes of developing and sharing knowledge and skills around feminist appropriations of free software, technopolitical tools for organizing, and feminist pedagogies. Feminist servers adopt the ideas of FLOSS and other tech communities in which disempowered users can become (code) contributors, system admins and hackers by choosing their own dependencies, enabling communities to become infrastructure makers and maintainers. In experimenting and engaging with modes of feminist federation, we aim to reach out and share our knowledges, thereby becoming a little more visible. Doing so has also allowed us to document and reflect on our practice, and to speculate and make space for questions and articulations that might guide further paths and developments. Feminist servers and modes of federation can support us in our needs and amidst the “ruins of capitalism” (Tsing, “End of the World”). They make space for ways of relating differently to each other and (with) technology.
Acknowledgements
The following sysadmins from a network of feminist servers contributed to the collaborative writing process and previously published versions: ooooo - transuniversal constellation, vo ezn - sound && infrastructure artist, Mara Karagianni - artist and software developer, nate wessalowski - technofeminist researcher and doctoral student.
English correction by Aileen Derieg.
Authors and contributors form part of a wider ecosystem of techno-/cyberfeminists, sysadmins and allies, mostly across Europe and Abya Yala, South America.
Many thanks to the organizers and reviewers of Minor Tech for giving us the chance to articulate our praxis.
Notes
- ↑ Formulation following spideralex, "Feministische Infrastruktur" 59.
- ↑ The following lists are part of the extensive network of feminist servers: Adminsysters, https://lists.genderchangers.org/mailman/listinfo/adminsysters; Eclectic Tech Carnival, https://lists.eclectictechcarnival.org/mailman/listinfo/etc-int; Femservers, https://lists.systerserver.net/mailman3/lists/femservers.lists.systerserver.net/.
- ↑ Videomakers had gotten in touch with Systerserver’s video platform via the residencies and the TransHackFeminism Covergence, https://zoiahorn.anarchaserver.org/thf2022/bienvenides-a-la-convergencia-transhackfeminista-2022/.
- ↑ Minor Tech workshop facilitated by Transmediale 2023, https://aprja.net//announcement/view/1034.
- ↑ Overview of software and protocols for distributed and decentralized social networking, https://en.wikipedia.org/wiki/Comparison_of_software_and_protocols_for_distributed_social_networking.
- ↑ For an extensive list of feminist servers, see https://alexandria.anarchaserver.org/index.php/You_can_check_some_of_their_services_in_this_section.
- ↑ While some feminist infrastructure projects are open to feminists of all genders, most of them - like Systerserver - are shaped by a separatist approach that excludes cis men from participating. We do this in order to create spaces where we don’t have to constantly worry about being gendered as ‘other to men’. Many of the ways we relate to and behave around cis men are deeply rooted in our cultural memories: counteracting male violences or carelessness, feeling pressured into proving to be ‘as good as men’, falling back into patterns of serving or pleasing men or just not taking up space due to fear of pushback. Excluding cis men is of course not a sufficient criterion for creating spaces without patriarchal violence, but our experiences have taught us that it can be very liberating. Besides, cis men have many opportunities to engage in mixed/all gender tech related activism.
- ↑ The adapter they are named after is a device that changes the ascribed ‘orientation’ of a port – both stressing the always gendered aspects of technology as well as the urgent need to reverse and counteract the cis male domination of technological domains.
- ↑ More information about the /etc and past events see https://eclectictechcarnival.org/ETC2019/archive/.
- ↑ The concept of safe/r spaces dates back to the heyday of the second wave of feminism, when lesbians, trans people and women started organizing within and through women-only spaces. It has since been adapted to online spaces, see Katrin Kämpf, “Safe Spaces”.
- ↑ About video monetization and censorship on YouTube, see Mara Karagianni, “Software as Dispute Resolution System: Design, Effect and Cultural Monetization”.
- ↑ Formulation following “A Feminist Server Manifesto”.
- ↑ Affero GPL has an extra provision that addresses the use of software over a computer network (such as a web application), and requires the full source code be accessible to any network user of the AGPL-licensed software. “Affero General Public License”. In Wikipedia, accessed June 4, 2023, https://en.wikipedia.org/wiki/Affero_General_Public_License.
- ↑ See, e.g., the artist project “Read The Feminist Manual” about gender discrimination in FLOSS, part of a research project on online governance organized by the Media Enterprise Design Lab at the University of Colorado Boulder, accessed on May 21, 2023, https://excavations.digital/projects/read-feminist-manual/.
- ↑ In chapter III, on maintenance, Hilfling Ritasdatter critically contests the difference between unproductive labor, which sustains life, and creative work, which produces and changes the world, as articulated by various political theorists, for example in Hannah Arendt’s The Human Condition (Hilfling Ritasdatter 149). See also the distinction between development and maintenance in the “Manifesto for Maintenance Art” from 1969 by Mierle Laderman Ukeles, discussed in the context of feminist servers by Ines Kleesattel (184f).
- ↑ See also "A Feminist Server Manifesto" where it states that “A feminist server... tries hard not to apologize when she is sometimes not available.”
- ↑ ‘Instance’ is the term for a particular installation of a piece of software on a server.
- ↑ The word ‘fediverse’ is a blend of federation and universe. “Fediverse”, in Wikipedia, last modified May 27, 2023, https://en.wikipedia.org/wiki/Fediverse.
- ↑ An easy way to explain federated media is through the concept of email providers, https://docs.joinmastodon.org/#federation.
- ↑ In FLOSS environments, forking describes the copying, modification and development of a software in a way that differs from the previous creators’ or the maintainers’ projects and is often accompanied by a splitting of communities.
- ↑ How the Fediverse connects, image creators Imke Senst, Mike Kuketz, licenses Creative Commons Attribution-Share Alike 4.0 International, https://social.tchncs.de/@kuketzblog/107045136773063674.
- ↑ Covenantal federation is distinguished from contractual federation, which is based on legal texts and institutional laws.
- ↑ This is different from Mastodon, where a kind of covenant is in place. Here, instances are federated by default with other instances that commit to a shared set of rules, such as moderation against racism, sexism, trans- and homophobia, or daily backups of all data and posts. Accessed May 26, 2023, https://joinmastodon.org/covenant.
- ↑ Systerserver received financial support for this undertaking as part of the 360 Degrees of Proximities project by the Dutch Creative Industries Fund.
- ↑ For more details about the collaboration, see https://mur.at/project/syster360/.
- ↑ An in-house server is one that is physically located in the space, as opposed to a cloud server, accessed June 3, 2023, https://www.techsafety.org/inhouse-vs-cloud.
- ↑ Looking at the development history of OStatus and its implementation in earlier decentralized social networks, the group feature was dropped in 2013. In a user’s comment in the pump.io social network code repository we read: “This is a major drawback since the migration. We were using the ‘koumbitstatus’ group to do status updates for our network in a decentralised way, on some servers outside of our main infrastructure. This functionality is now completely gone. While I think now that we shouldn’t have relied on identi.ca for that service, I was expecting the ‘federation’ bit to survive the migration: I post those notices from my home statusnet server, and the fact that those don't communicate at all anymore makes this a very difficult migration. This will clearly make us hesitant in using pump.io or any other federated protocol (as opposed to say: a simple html page with rss feeds) to post our updates.” Accessed on May 28, 2023, https://github.com/pump-io/pump.io/issues/299.
- ↑ In January 2018, the World Wide Web Consortium (W3C) published the ActivityPub standard as a recommendation.
- ↑ The request for group accounts has been open on the GitHub code repository of PeerTube since 2018, and there is a long thread of users requesting this feature. In one of the comments we read: "IMHO it would be a good thing to promote collaborative creation. It would be another way to offer something different from Youtube (which is centered on individuals)." Accessed on May 28, 2023, https://github.com/Chocobozzz/PeerTube/issues/699.
- ↑ See Bluesky (https://blueskyweb.xyz/blog/3-6-2022-a-self-authenticating-social-protocol) and Manyverse (https://www.manyver.se/).
- ↑ Looking at the forum of ActivityPub, most people who have profile pictures and are the most active seem to be white men, https://socialhub.activitypub.rocks/.
Works cited
A Feminist Server Manifesto. 2014, https://areyoubeingserved.constantvzw.org/Summit_afterlife.xhtml. Accessed 8 June 2023.
anarchaserver.org. Accessed 8 June 2023.
A Wishlist for Trans*Feminist Servers. 2022, https://etherpad.mur.at/p/tfs.
Berlant, Lauren. "The Commons: Infrastructures for Troubling Times". Environment and Planning D: Society and Space 34, no. 3 (June 2016): 393–419. https://doi.org/10.1177/0263775816645989.
Bosu, Amiangshu, and Kazi Zakia Sultana. "Diversity and Inclusion in Open Source Software (OSS) Projects: Where Do We Stand?" In 2019 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 1–11. Porto de Galinhas, Recife, Brazil: IEEE, 2019. https://doi.org/10.1109/ESEM.2019.8870179.
Budington, Bill. "Is Mastodon Private and Secure? Let’s Take a Look". Electronic Frontier Foundation, 16 November 2022. https://www.eff.org/deeplinks/2022/11/mastodon-private-and-secure-lets-take-look.
broken_house account, https://tube.systerserver.net/a/broken_house/video-channels. Accessed 8 June 2023.
Cockburn, Cynthia. Machinery of Dominance: Women, Men, and Technical Know-How. Pluto Press, 1985.
Derieg, Aileen. "Tech Women Crashing Computers and Preconceptions". Instituant Practices - Transversal Texts, July 2007. https://transversal.at/transversal/0707/derieg/en.
Dunbar-Hester, Christina. "Hacking Technology, Hacking Communities: Codes of Conduct and Community Standards in Open Source". MIT Case Studies in Social and Ethical Responsibilities of Computing, no. Summer 2021 (10 August 2021). https://doi.org/10.21428/2c646de5.07bc6308.
Gehl, Robert W., and Diana Zulli. "The Digital Covenant: Non-Centralized Platform Governance on the Mastodon Social Network". Information, Communication & Society (15 December 2022): 1–17. https://doi.org/10.1080/1369118X.2022.2147400.
“Genderchangers,” last updated February 12, 2014. https://www.genderchangers.org/faq.html.
Haraway, Donna. Staying with the Trouble: Making Kin in the Chthulucene. Experimental Futures: Technological Lives, Scientific Arts, Anthropological Voices. Duke University Press, 2016.
Hilfling Ritasdatter, Linda. Unwrapping Cobol: Lessons in Crisis Computing. Malmö University, 2020.
Lewis, Sarah Jamie. Queer Privacy: Essays from the Margins of Society. 2017. https://ia600707.us.archive.org/7/items/Sarah-Jamie-Lewis-Queer-Privacy/Sarah%20Jamie%20Lewis%20-%20Queer%20Privacy.pdf.
Jensen, Erik. "Pink Triangle". In The International Encyclopedia of Human Sexuality, edited by Anne Bolin and Patricia Whelehan, 861–1042. Oxford, UK: John Wiley & Sons, Ltd, 2015.
Kämpf, Katrin M. "Safe Spaces, Self-Care and Empowerment – Netzfeminismus im Sicherheitsdispositiv". FEMINA POLITICA – Zeitschrift für feministische Politikwissenschaft 23, no. 2 (17 November 2014): 71–83. https://doi.org/10.3224/feminapolitica.v23i2.17615.
Karagianni, Mara. "Software as Dispute Resolution System: Design, Effect and Cultural Monetization". Computational Culture, no. 7 (21 October 2019). http://computationalculture.net/software-as-dispute-resolution-system-design-effect-and-cultural-monetization/.
Kleesattel, Ines. "Situated Aesthetics for Relational Critique on Messy Entanglements from Maintenance Art to Feminist Server Art". In Aesthetics of the Commons, edited by Cornelia Sollfrank, Felix Stalder, and Shusha Niederberger. Diaphanes, 2021.
Lewis, Sarah Jamie. "Federation is still the Worst of all Worlds | Pseudorandom." April 25, 2022. https://pseudorandom.resistant.tech/federation-is-the-worst-of-all-worlds.html.
Mansoux, Aymeric, and Roel Roscam Abbing. "Seven Theses on the Fediverse and the Becoming of FLOSS". In The Eternal Network: The Ends and Becomings of Network Culture, 124–40. Institute for Network Cultures and Transmediale, 2020.
Mauro-Flude, Nancy, and Yoko Akama. "A Feminist Server Stack: Co-Designing Feminist Web Servers to Reimagine Internet Futures", 2022.
meltionary.com. Accessed 8 June 2023.
Motskobili, Mika. "LEVER BURNS". Piet Zwart Institute, Willem de Kooning Academy, 2021. https://project.xpub.nl/lever_burns/pdf/Ezn_LeverBurns.pdf.
Muldoon, James. Platform Socialism: How to Reclaim Our Digital Future from Big Tech. Pluto Press, 2022. https://doi.org/10.2307/j.ctv272454p.
Nafus, Dawn. "'Patches don’t have Gender': What is not Open in Open Source Software". New Media & Society 14, no. 4 (June 2012): 669–83. https://doi.org/10.1177/1461444811422887.
Niederberger, Shusha. "Feminist Server – Visibility and Functionality – Creating Commons", 2019. https://creatingcommons.zhdk.ch/feminist-server-visibility-and-functionality/index.html.
———. "Der Server ist das Lagerfeuer. Feministische Infrastrukturkritik, Gemeinschaftlichkeit und das kulturelle Paradigma von Zirkulation in Digitaler Infrastruktur". preprint, 2021.
Snelting, Femke, and spideralex. "Forms of Ongoingness". Interview by Cornelia Sollfrank, 16 November 2016.
spideralex. "Pas d'internet féministe sans serveurs féministes". Interview by Claire Richard, 2019. https://pantherepremiere.org/texte/pas-dinternet-feministe-sans-serveurs-feministes/.
———. "Feministische Infrastruktur aufbauen: Helplines zum Umgang mit geschlechtsspezifischer Gewalt im Internet". In Technopolitiken der Sorge, edited by Christoph Brunner, Grit Lange, and nate wessalowski. transversal, 2023.
Sterne, Jonathan, ed. The Participatory Condition in the Digital Age. Electronic Mediations 51. University of Minnesota Press, 2016.
systerserver.net. Accessed 8 June 2023.
terminal.leverburns.blue. Accessed 8 June 2023.
Terrell, Josh, Andrew Kofink, Justin Middleton, Clarissa Rainear, Emerson Murphy-Hill, Chris Parnin, and Jon Stallings. "Gender Differences and Bias in Open Source: Pull Request Acceptance of Women versus Men". PeerJ Computer Science 3 (1 May 2017). https://doi.org/10.7717/peerj-cs.111.
Toupin, Sophie and spideralex. "Introduction: Radical Feminist Storytelling and Speculative Fiction: Creating New Worlds by Re-Imagining Hacking". Ada: A Journal of Gender, New Media, and Technology, no. 13 (2018). https://doi.org/10.5399/uo/ada.2018.13.1.
Travers, Ann. "Parallel Subaltern Feminist Counterpublics in Cyberspace". Sociological Perspectives 46, no. 2 (June 2003): 223–37. https://doi.org/10.1525/sop.2003.46.2.223.
Tronto, Joan C. Moral Boundaries: A Political Argument for an Ethic of Care. Routledge, 1993.
Tsing, Anna Lowenhaupt. "On Nonscalability: The Living World is not Amenable to Precision-Nested Scales". Common Knowledge 18, no. 3 (1 August 2012): 505–24. https://doi.org/10.1215/0961754X-1630424.
———. The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins. Princeton University Press, 2015.
tube.systerserver.net. Accessed 8 June 2023.
“What is ActivityPub”, 2023, https://docs.joinmastodon.org/#fediverse. Accessed 26 May 2023.
Contributors
Camille Crichlow is a PhD Researcher at the Sarah Parker Remond Centre for the Study of Racism and Racialisation (University College London). Her research interrogates how the historical and socio-cultural narrative of race manifests in contemporary algorithmic technologies.
Teodora Sinziana Fartan is a researcher, computational artist and writer based in London, UK. Her research-artistic practice explores the new spaces of possibility opened up by collaborations between software and storytelling, with a particular focus on the new modes of relational and affective experience rendered into being by the networked data exchanges scripted into interfaces. Driven by speculative fiction, Teodora’s practice explores the immersive, interactive and intelligent more-than-human entanglements that can take shape within algorithmically-mediated spaces. Teodora is currently a PhD Researcher at the Centre for the Study of the Networked Image at London South Bank University and a Lecturer at the University of the Arts London.
Susanne Förster is a PhD candidate and research associate in the project “Agentic Media: Formations of Semi-Autonomy” at the University of Siegen. Her work deals with imaginaries and infrastructures of conversational artificial agents. Previously, she coordinated exhibitions at Haus der Kulturen der Welt (HKW), Berlin.
Inte Gloerich (PhD researcher at Utrecht University and Institute of Network Cultures) explores sociotechnical imaginaries around blockchain technology. Her work involves the politics, artistic imagination, and (counter)cultures surrounding digital technology. She co-edited MoneyLab Reader 2: Overcoming the Hype, State Machines: Reflections and Actions at the Edge of Digital Citizenship, and Feminist Finance Zine & Syllabus.
Mara Karagianni is an artist, software developer and system administrator. Their work involves computational and analogue media for publishing, python programming, making technical user manuals & drawings, and writing about the internet, FOSS and feminism.
Freja Kir researches across intersections of artistic methods, spatial publishing and digital media environments. Kir is the creative director of fanfare (collective for visual communication), contributes to stanza (studio for critical publishing), and is a PhD researcher at the University of West London.
Jung-Ah Kim is a PhD researcher in Screen Cultures and Curatorial Studies at Queen’s University. She explores various aspects of traditional Korean textiles, including their technology, production, cultural heritage, diaspora, and more.
Inga Luchs is a PhD candidate in Media Studies at the University of Groningen. Inga obtained her B.A. and M.A. in cultural studies and digital culture at Leuphana University, Lüneburg. Departing from the problem of algorithmic discrimination, she seeks to investigate the key technical principles of machine learning to uncover underlying assumptions and beliefs. ORCID ID: 0000-0002-2731-0549
Alasdair Milne is a PhD researcher with Serpentine Galleries’ Creative AI Lab and King’s College London. His work focuses on the collaborative systems that emerge around new technologies.
Shusha Niederberger is a PhD student at Zurich University of the Arts / Hamburg University of Fine Arts, working on user subject positions in datafied environments (https://latentspaces.zhdk.ch). She has a background in media art practice and art education, and has previously researched digital artistic practices and the commons (http://creatingcommons.zhdk.ch).
nate wessalowski is a PhD student and technofeminist researcher at the University of Münster working on alternative data practices in collaboration with feminist server collectives. Based on a background in cultural studies and digital cultures (Universities of Hildesheim and Lüneburg), their work focuses on the epistemologies of datafication, the history and futures of online commons and, most recently, a feminist critique of cybersecurity.
Jack Wilson is a PhD researcher at the University of Warwick’s Centre for Interdisciplinary Methodologies. He is not a conspiracy theorist.
xenodata co-operative investigates image politics, algorithmic culture and technological conditions of knowledge production and governance through art and media practices. The collective was established by curator Yasemin Keskintepe and artist-researcher Sasha Anikina. Together with Luba Elliott, they co-curated the IMPAKT festival 2018 entitled "Algorithmic Superstructures". Alexandra (Sasha) Anikina is a media scholar, artist and film-maker, currently a Senior Lecturer in Media Practices at Winchester School of Art (University of Southampton). Yasemin Keskintepe has curated exhibitions on the politics and poetics of technology at ZKM and German Hygiene-Museum among others, and is currently a PhD candidate at the University of Potsdam.
Sandy Di Yu is a PhD researcher at the University of Sussex and co-managing editor of DiSCo Journal (www.discojournal.com), using digital artist critique to examine shifting experiences of time.