Pdf:APRJA Content Form: Difference between revisions

From creative crowd wiki
Jump to navigation Jump to search
mNo edit summary
No edit summary
Tag: Manual revert
 
(3 intermediate revisions by the same user not shown)
Line 9: Line 9:
<div class="authors">
<div class="authors">


<!-- Manetta Berends & Simon Browne<br> good idea or not? -->
Manetta Berends & Simon Browne<br>
Kendal Beynon<br>
Kendal Beynon<br>
Edoardo Biscossi<br>
Edoardo Biscossi<br>
Line 72: Line 72:


</div>
</div>
<div class="page-break"></div>


<div class="item">
<div class="item">

Latest revision as of 12:24, 29 October 2024

A Peer-Reviewed Journal About

CONTENT/FORM

Manetta Berends & Simon Browne
Kendal Beynon
Edoardo Biscossi
Luca Cacini
Esther Rizo Casado
Pierre Depaz
Marie Naja Lauritzen Dias
Mateus Domingos
Bilyana Palankasova
Asker Bryld Staunæs & Maja Bak Herrie
Denise Helene Sumi

Christian Ulrik Andersen
& Geoff Cox (Eds.)

Volume 13, Issue 1, 2024
ISSN 2245-7755

A Peer-Reviewed Journal About_
ISSN: 2245-7755

Editors: Christian Ulrik Andersen & Geoff Cox
Published by: Digital Aesthetics Research Centre, Aarhus University
Design: Manetta Berends & Simon Browne (CC)
Fonts: Happy Times at the IKOB by Lucas Le Bihan, AllCon by Simon Browne
CC license: ‘Attribution-NonCommercial-ShareAlike’

www.aprja.net


Christian Ulrik Andersen
& Geoff Cox

Editorial

Doing Content/Form

Content cannot be separated from the forms through which it is rendered. If our attachment to standardised forms and formats – served to us by big tech – limit the space for political possibility and collective action, then we ask what alternatives might be envisioned, including for research itself?[1] What does research do in the world, and how best to facilitate meaningful intervention with attention to content and form? Perhaps what is missing is a stronger account of the structures that render our research experiences, that serve to produce new imaginaries, new spatial and temporal forms?

Addressing these concerns, published articles are the outcome of a research workshop that preceded the 2024 edition of the transmediale festival, in Berlin.[2] Participants developed their own research questions and provided peer feedback to each other, and prepared articles for a newspaper publication distributed as part of the festival.[3] In addition to established conventions of research development, they also engaged with the social and technical conditions of potential new and sustainable research practices – the ways it is shared and reviewed, and the infrastructures through which it is enabled. The distributed and collaborative nature of this process is reflected in the combinations of people involved – not just participants but also facilitators, somewhat blurring the lines between the two. Significantly, the approach also builds on the work of others involved in the development of the tools and infrastructures, and the short entry by Manetta Berends and Simon Browne acknowledges previous iterations of 'wiki-to-print' and 'wiki2print', which has in turn been adapted as 'wiki4print'.[4]

Figure 1: Content/Form workshop at Haus der Kulturen der Welt, Berlin, 29-31 January 2024.

Approaching the wiki as an environment for the production of collective thought encourages a type of writing that comes from the need to share and exchange ideas. An important principle here is to stress how technological and social forms come together and encourage reflection on organisational processes and social relations. As Stevphen Shukaitis and Joanna Figiel have put it in “Publishing to Find Comrades”: “The openness of open publishing is thus not to be found with the properties of digital tools and methods, whether new or otherwise, but in how those tools are taken up and utilized within various social milieus.”

Using MediaWiki software and web-to-print layout techniques, the experimental publication tool/platform wiki4print has been developed as part of a larger infrastructure for research and publishing called ‘ServPub’,[5] a feminist server and associated tools developed and facilitated collectively by grassroot tech collectives In-grid[6] and Systerserver.[7] It is a modest attempt to circumvent academic workflows and conflate traditional roles of writers, editors, designers, developers alongside the affordances of the technologies in use, allowing participants to think and work together in public. As such, our claim is that such an approach transgresses conventional boundaries of research institutions, like a university or an art school, and underscores how the infrastructures of research, too, are dependent on maintenance, care, trust, understanding, and co-learning.

Figure 2: Raspberry pi server used for the Content/Form workshop and newspaper publication. See https://servpub.net/.

These principles are apparent in the contribution of Denise Sumi who explores the pedagogical and political dimensions of two 'pirate' projects: an online shadow library that serves as an alternative to the ongoing commodification of academic research, and another that offers learning resources that address the crisis of care and its criminalisation under neoliberal policies. The phrase "technopolitical pedagogies" is used to advocate for the sharing of knowledge, and to use tools to provide access to information and restrictive intellectual property laws. Further examples of resistance to dominant media infrastructures are provided by Kendal Beynon, who charts the historical parallel between zine culture and DIY computational publishing practices, including the creation of personal homepages and feminist servers, as spaces for identity formation and community building. Similarly drawing a parallel, Bilyana Palankasova combines online practices of self-documentation and feminist art histories of media and performance to expand on the notion of "content value" through a process of innovation and intra-cultural exchange.

New forms of content creation are also examined by Edoardo Biscossi, who proposes "platform pragmatics" as a framework for understanding collective behaviour and forms of labour within media ecosystems. These examples of content production are further developed by others in the context of AI and large language models (LLMs). Luca Cacini characterises generative AI as an "autophagic organism", akin to the biological processes of self-consumption and self-optimization. Concepts such as “model collapse” and "shadow prompting" demonstrate the potential to reterritorialize social relations in the process of content creation and consumption. Also concerning LLMs, Pierre Depaz meticulously uncovers how word embeddings shape acceptable meanings in ways that resemble Foucault’s disciplinary apparatus and Deleuze’s notion of control societies, as such restricting the lexical possibilities of human-machine dialogic interaction. This delimitation can be also seen in the ways that electoral politics is now shaped, under the conditions of what Asker Bryld Staunæs and Maja Bak Herrie refer to as a "flat reality". They suggest "deep faking" leads to a new political morphology, where formal democracy is altered by synthetic simulation. Marie Naja Lauritzen Dias argues something similar can be seen in the mediatization of war, such as in the case of a YouTube video of a press conference held in Gaza, where evidence of atrocity co-exists with its simulated form. On the other hand, rather than seek to reject these all-consuming logics of truth or lies, Esther Rizo Casado points to artistic practices that accelerate the hallucinatory capacities of image-generating AI to question the inherent power dynamics of representation, in this case concerning gender classifications. Using a technique called "xeno-tuning", pre-trained models produce weird representations of corporealities and hegemonic identities, thus making them transformative and agential. Falsifications of representation become potentially corrective of historical misrepresentation.

Returning to the workshop format, Mateus Domingos describes an experimental wi-fi network related to the feminist methodologies of ServPub. This last contribution exemplifies the approach of both the workshop and publication, drawing attention to how constituent parts are assembled and maintained through collective effort. This would not have been possible without the active participation of not only those mentioned to this point, the authors of articles but also the grassroots collectives who supported the infrastructure, and the wider network of participant-facilitators (which includes Rebecca Aston, Emilie Sin Yi Choi, Rachel Falconer, Mara Karagianni, Mariana Marangoni, Martyna Marciniak, Nora O' Murchú, ooooo, Duncan Paterson, Søren Pold, Anya Shchetvina, George Simms, Winnie Soon, Katie Tindle, and Pablo Velasco). In addition, we appreciate the institutional support of SHAPE Digital Citizenship and Digital Aesthetics Research Center at Aarhus University, the Centre for the Study of the Networked Image at London South Bank University, the Creative Computing Institute at University of the Arts, London, and transmediale festival for art and digital culture, Berlin. This extensive list of credits of human and nonhuman entities further underscores how form/content come together, allowing one to shape the other, and ultimately the content/form of this publication.

Notes

  1. With this in mind, the previous issue of APRJA used the term "minor tech", see https://aprja.net//issue/view/10332.
  2. Details of tranmediale 2024 can be found at https://transmediale.de/en/2024/sweetie. Articles are derived from short newspaper articles written during the workshop.
  3. The newspaper can be downloaded at https://cc.vvvvvvaria.org/wiki/File:Content-Form_A-Peer-Reviewed-Newspaper-Volume-13-Issue-1-2024.pdf
  4. See the article that follows for more details on this history, also available at https://cc.vvvvvvaria.org/wiki/Wiki-to-print.
  5. For more information on ServPub, see https://servpub.net/.
  6. In-grid, https://www.in-grid.io/
  7. Systerverver, https://systerserver.net/

Works cited

Andersen, Christian, and Geoff Cox, eds., A Peer Reviewed Journal About Minor Tech, Vol. 12, No. 1 (2023), https://aprja.net//issue/view/10332.

Shukaitis, Stevphen, and Joanna Figiel, "Publishing to Find Comrades: Constructions of Temporality and Solidarity in Autonomous Print Cultures," Lateral Vol. 8, No. 2 (2019), https://csalateral.org/issue/8-2/publishing-comrades-temporality-solidarity-autonomous-print-cultures-shukaitis-figiel/.

Biographies

Christian Ulrik Andersen is Associate Professor of Digital Design and Information Studies at Aarhus University, and currently a Research Fellow at the Aarhus Institute of Advanced Studies.

Geoff Cox is Professor of Art and Computational Culture at London South Bank University, Director of Digital & Data Research Centre, and co-Director of Centre for the Study of the Networked Image.

Manetta Berends & Simon Browne

About wiki-to-print

This journal is made with wiki-to-print, a collective publishing environment based on MediaWiki software[1], Paged Media CSS[2] techniques and the JavaScript library Paged.js[3], which renders a preview of the PDF in the browser. Using wiki-to-print allows us to work shoulder-to-shoulder as collaborative writers, editors, designers, developers, in a non-linear publishing workflow where design and content unfolds at the same time, allowing the one to shape the other.

Following the idea of "boilerplate code" which is written to be reused, we like to think of wiki-to-print as a boilerplate as well, instead of thinking of it as a product, platform or tool. The code that is running in the background is a version of previous wiki-printing instances, including:

  • the work on the Diversions[4] publications by Constant[5] and OSP[6]
  • the book Volumetric Regimes[7] by Possible Bodies[8] and Manetta Berends[9]
  • TITiPI's[10] wiki-to-pdf environments[11] by Martino Morandi
  • Hackers and Designers'[12] version wiki2print[13] that was produced for the book Making Matters[14]

So, wiki-to-print/wiki-to-pdf/wiki2print is not standalone, but part of a continuum of projects that see software as something to learn from, adapt, transform and change. The code that is used for making this journal is released as yet another version of this network of connected practices[15].

This wiki-to-print is hosted at CC[16] (creative crowds). While moving from cloud to crowds, CC is a thinking device for us how to hand over ways of working and share a space for publishing experiments with others.

Notes

Denise Helene Sumi

On Critical "Technopolitical Pedagogies"

On Critical "Technopolitical Pedagogies"

Learning and Knowledge Sharing with Public Library/Memory of the World and syllabus ⦚ Pirate Care

Abstract

This article explores the pedagogical and political dimensions of the projects Public Library/Memory of the World and syllabus ⦚ Pirate Care. Public Library/Memory of the World (2012–ongoing) by Marcell Mars and Tomislav Medak serves as an online shadow library in response to the ongoing commodification of academic research and threats to public libraries. syllabus⦚ Pirate Care (2019), a project initiated by Valeria Graziano, Mars, and Medak, offers learning resources that address the crisis of care and its criminalisation under neoliberal policies. The article argues that by employing "technopolitical pedagogies" and advocating the sharing of knowledge, these projects enable forms of practical orientation in a complex world of political friction. They use network technologies and open-source tools to provide access to information and support civil disobedience against restrictive intellectual property laws. Unlike other scalable "pirate" infrastructures, these projects embrace a nonscalable model that prioritises relational, context-specific engagements and provides tools for the creation of similar infrastructures. Both projects represent critical pedagogical interventions, hacking the monodimensional tendencies of educational systems and library catalogues, and produce commoner positions.

What Is the Purpose of Pedagogy? Or How to Compose Content

Every human lives in a world. Worlds are composed of contents, the identification of those contents, and by the configuration of content relations within — semantically, operationally and axiologically. [...] The identification of the contents of a world and its relational configuration is what establishes frames of reference for practical orientation. (Reed 1)

This quote, taken from the opening words of Patricia Reed's essay "The End of a World and Its Pedagogies" offers a good entry point for what will be discussed below in relation to the two projects Public Library/Memory of the World and syllabus ⦚ Pirate Care. Public Library/Memory of the World is an online shadow library initiated in 2012 by Marcell Mars and Tomislav Medak in a situation where knowledge and academic research was, and still is, largely commodified and followed the logics of property law, when public libraries were threatened by austerity measures and existing shadow libraries were increasingly threatened by lawsuits (Mars and Medak 48). As a continuation of Memory of the World, and as a response to a period of neoliberal politics in which care is "increasingly defunded, discouraged and criminalised," syllabus ⦚ Pirate Care was initiated in 2019 by Valeria Graziano, Mars and Medak (syllabus ⦚ Pirate Care 2). It is an online syllabus that provides information on initiatives that counter the criminalisation of care in a neoliberal system. The following text will discuss the two projects and argue that they produce and distribute content that can be linked back to their specific form of "technopolitical pedagogy" and commoning of knowledge, thus producing a specific practical and political orientation in the world (syllabus ⦚ Pirate Care 7). Practical orientation, with reference to Reed, is understood as a method of situating oneself within a complex and shifting reality and paying particular attention to the vectors and structures of specific relations and their activations. Practical orientation requires an active position in the development of new frameworks. If we understand content and information retrieval as a political project in itself (Kolb and Weinmayr 1), then what content we are able to access and how we are able to access is matters in relation to how worlds are composed.

These two projects were specifically chosen because they differ from similar "pirate" infrastructures such as sci-hub or library genesis in that they operate differently and are relatively small in scale. In her text "On Nonscalability", Anna Lowenhaupt Tsing argues for the development of a "theory of nonscalability", which she defines as the negative of scalability (Lowenhaupt Tsing 507). "Scalable projects"," she writes, "are those that can expand without changing" (Lowenhaupt Tsing 507). While she refers to relationships as "potential vectors of transformation" (Lowenhaupt Tsing 507), the content of both Public Library/Memory of the World and Syllabus ⦚ Pirate Care is presented with a strong reference to the librarians, authors, activists, and initiatives assembled, thus potentially allowing for a relationship with the non-interchangeable people behind the projects or book collections. Both projects provide not only the content, but also the tools to recreate such infrastructures/forms in different contexts and are therefore non static. The two projects differ from similar pirate structures in that they are not scalable in their current form and provide toolkits for recreating similar infrastructures. They are not a project of "uniform expansion", but capable of forming relations of care rather than modes of alienation (Lowenhaupt Tsing 507).

In the essay quoted above, Reed discusses the concept of worlds (actual worlds or models) as frameworks of inhabitation, shaped by content-related relations that create practical orientations. She argues that the current globalised world is characterised by monodimensional tendencies, leading to a "making-small of worlds" and a reduction of content and diversity (a similar argument to that of Lowenhaupt Tsing regarding the modern project of scalability in the sense of growth and expansion). This tendency to make "small worlds" is a familiar metaphor for describing the topologies of network technologies (Watts). One guiding question of this essay is how projects such as Public Library/Memory of the World and syllabus ⦚ Pirate Care can counteract this making-small of worlds. Reed explores how worlds endure through their ability to absorb friction but come to an end when they fail to do so. She points to a disparity created by what she calls the "insuppressible friction" of "Euromodern" and "globalising practices" with "the planetary" and suggests that at the end of a world, when frictions are no longer absorbed, pedagogies must attune by adapting to existing configurations and imagining other worlds (Reed 3). Although Reed focuses on the "insuppressible friction" around the disparity of the Euromodern and the planetary, I intend to apply her argument that pedagogies must attune to learn to absorb the disparity created by frictions otherwise — namely to a state where the disparity for a political desire for a monodimensional world order, a pluriversal world order, or one that understands the world as complex "dynamic cultural fabric" becomes irrepressible (Rivera Cusicanqui 107). What is the purpose of any pedagogy if not to absorb these very political frictions?

Then, what is the purpose of pedagogy, of a school, of a university? Gary Hall, critical theorist and media philosopher, answers this question as follows:

One of the purposes of a university is to create a space where society's common sense ideas can be examined and interrogated, and to act as a testing ground for the development of new knowledges, new subjectivities, new practices and new social relations of the kind we are going to need in the future, but which are often hard — although not impossible — to explore elsewhere. (Hall 169)

This essay is written at a time when pro-Palestinian protests on US campuses are spreading to European and Middle Eastern universities. According to the Crowd Counting Consortium, more than 150 pro-Palestinian demonstrations took place on US campuses between April 17 and 30, 2024. The same Washington Post article that reported these figures affirmed that state, local, and campus police, often in riot gear, monitored or dispersed crowds on more than eighty campuses (Rosenzweig-Ziff et al.). While their presence was often requested by university administrations themselves, by the early morning of May 17, 2024, more than 2,900 people had been arrested at campus protests in the US (Halina et al.). It is in this climate at universities, Hall’s statement quoted above about the university as a space for testing new social relations and new subjectivities needs to be critically reconsidered, as well as the university, its libraries, and archives as citadels of knowledge. Another level on which this text argues in favor of learning from and with projects such as Public Library/Memory of the World and syllabus ⦚ Pirate Care and its everyday and critical pedagogies , is the growing discussion about the decolonisation of libraries in the Global North; about how knowledge has been catalogued, collected, and stored in these library catalogues; about what socially and historically generated orders and hierarchies underlie them, and what content has been left out by which authors (Kolb and Weinmayr 1).

If knowledge — including academic research, books, and papers being produced by scholars and researchers — is to circulate in a multitude of ways, then ways of sharing this knowledge and spaces for learning should be supported, enhanced, and presented alongside an institutional setting. Learning and producing knowledge from within institutions should not exclude learning from and sharing with the periphery. Any form of knowledge can never be entirely public or private but must involve a variety of "modes of authorship, ownership and reproduction", as Hall writes (161). These distributed modes of authorship, ownership, and reproduction protect a society from knowledge being censored or even destroyed — and so worlds, histories, and biographies can continue to flourish and be discussed from different perspectives. In addition to state educational institutions such as universities, libraries, and state archives, other pillars within societies are needed to preserve and disseminate knowledge.

In the book School: A Recent History of Self-Organized Art Education, Sam Thorne has collected conversations that feature projects that enable alternative pedagogical practices or “radical education” outside of large state institutions, such as the Silent University in Boston; the School for Engaged Art in St. Petersburg/Berlin, associated with the collective and magazine Chto Delat; or the Public School founded by Sean Dockray and Fiona Whitton, associated with the platform AAAARG.org, among many others (Thorne 26). With his contribution to the field, Thorne gathers examples of "flexible, self-directed, social and free" and often "small", "non-standardized" programmes and formats for general education (Thorne 31ff). Within this trajectory of self-organised educational platforms and critical/radical pedagogies, the focus on Public Library/Memory of the World and syllabus⦚ Pirate Care may offer a response to the increasingly repressive climate within public educational institutions, the critical review of existing library catalogues, and the "circuits of academic publishing" still largely controlled by these same institutions alongside a profitable academic publishing industry (Mars and Medak 60). Unlike most of the examples in School, the two examples I want to discuss are defined by the fact that they are not site-specific, but make use of network technologies and infrastructures, and therefore offer a reassessment of the question of how to use the possibilities of knowledge circulation offered by technological networks, thinking alongside questions of authorship, ownership, and reproduction, as well as the maintenance and care of knowledge.

Both projects will be discussed as examples of "techno-cultural formulations" (Goriunova 44) that embed critical pedagogies and not only address the current regulations of the circulation of knowledge and the criminalisation of care and solidarity that coinsides with it, but also offer tools and strategies to oppose these mechanisms individually and collectively. Goriunova's notion of "techno-cultural formations" refers to the ways in which technology and cultural practices co-evolve and shape one another and how these interactions produce new forms of culture and social organization, not falling into the narrative of techno-determinism. While “techno-cultural formations” play a crucial role in how knowledge is being navigated or retrieved, this essay argues, that it is all the more important to pay attention to critical pedagogies within techno-cultural formations as well as the content-form relation of certain formations. In order to better understand how techno-cultural formations shape social organisation differently from techno-determinism, the next part will make a small excursion to describe how distributed network technologies have been used in the last two decades to further confuse practical orientation, before returning to the actual projects.

From the Citadel to Calibre: Becoming an Autonomous Amateur Librarian

The push to disorient and capitalise on the "hyper-emotionalism of post-truth politics" (Hall 172), together with the rise of the digital platform economy, where companies such as Google or Amazon connect users and producers and extract value from the data generated by their interactions, transforming labour and further concentrating capital and power (Srnicek), has become increasingly influential in the politics of the last two decades. These developments have further confused the practical orientation and identification of information and content, and created political frictions. What became known as the Cambridge Analytica data scandal revealed to a wider public that the populist authoritarian right was exploiting the possibilities of network and communication technologies for its own ends. What Alexander Galloway observed in his 2010 essay "Networks" became clear:

Distributed networks have become hegemonic only recently, and because of this it is relatively easy to lapse back into the thinking of a time when networks were disruptive of power centers, when the guerilla threatened the army, when the nomadic horde threatened the citadel. But this is no longer the case. The distributed network is the new citadel, the new army, the new power. (Galloway 290)

In the same essay, Galloway points out the inherent contradictions within networked systems — how they simultaneously enable open access and impose new forms of regulation, thus he called for a "critical theory" when applying the network form (Galloway 290). Although, in 2010, Galloway was still very much focusing on distributed networks as the new citadel, when in fact it was the scale-free networks that a few years later made it possible for the Facebook–Cambridge Analytica data scandal to fully unfold at the scale it did. In her award-winning article, investigative journalist Carole Cadwalladr reveals the mechanisms and scale by which data analytics firm Cambridge Analytica harvested data from individual Facebook users to supply to political campaigns, including Donald Trump's 2015 presidential campaign and the Brexit campaign. Cadwalladr compares the massive scandal to a "massive land grab for power by billionaires via our data". She wrote: "Whoever owns this data owns the future".

In their text "System of a Takedown" on circuits of academic publishing, Mars and Medak remind us that the modern condition of land grabbing and that of intellectual property, and thus copyright for digital and discrete data, have the same historical roots in European absolutism and early capitalism in the seventeenth and eighteenth centuries. Intellectual labour in the age of mechanical reproduction, they say, has been given an unfortunate metaphor: "A metaphor modeled on the scarce and exclusive character of property over land." (Mars and Medak 49) Mars and Medak refer to a complex interplay between capital flows, property rights, and the circuits of academic publishing. In their text, they essentially criticise what they call the “oligopoly” of academic publishing. Mars and Medak state that in 2019, academic publishing was a $10 billion industry, 75 percent of which was funded by university library subscriptions. They go on to show that the major commercial publishers in the field make huge profit margins, regularly over 30 percent in the case of Reed Elsevier, and not much less in the case of Taylor and Francis, Springer, Wiley-Blackwell, and others. Mars and Medak argue that publishers maintain control over academic output through copyright and reputation mechanisms, preventing alternatives such as open access from emerging. They suggest that this control perpetuates inequality and limits access to knowledge (Mars and Medak 49). Mars and Medak follow a trajectory in their critique of the regulation of the circulation of knowledge. In his 2008 "Guerrilla Open Access Manifesto", programmer and activist Aaron Swartz criticised the academic publishing system and advocated civil disobedience to oppose these mechanisms:

The world’s entire scientific and cultural heritage, published over centuries in books and journals, is increasingly being digitized and locked up by a handful of private corporations. [...] It's outrageous and unacceptable. [...] We need to take information, wherever it is stored, make our copies and share them with the world. We need to take stuff that's out of copyright and add it to the archive. We need to buy secret databases and put them on the Web. We need to download scientific journals and upload them to file sharing networks.” (Swartz 2008)

A few years prior to the publication of the "Guerilla Open Access Manifesto", the "Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities" was presented. The declaration points out that the internet offers an opportunity to create a global and interactive repository of scientific knowledge and cultural heritage, which could be distributed through the means of networking. The declaration calls on policymakers, research institutions, funding agencies, libraries, archives, and museums to consider its call to action and to implement open-access policies. More than twenty years later, access to this particular system that legally circulates academic knowledge remains accessible only to a few privileged students, professors, and university staff. The Universal Declaration of Human Rights' call for equal access to education is in no way supported by a system in which knowledge is still treated as a scarcity rather than a common good. Under these continuing conditions, Mars and Medak argue that courts, constrained by viewing intellectual property through a copyright lens, have failed to reconcile the conflict between access to knowledge and fair compensation for intellectual labour. Instead, they have overwhelmingly supported the commercial interests of major copyright industries, further deepening social tensions through the commodification of knowledge in the age of digital reproduction (Mars and Medak 2019). For this reason, Mars and Medak suggest that copyright infringement (in relation to academic publishing circuits) is not a matter of illegality, but of "legitimate action" (Mars and Medak 55). They argue that a critical mass of infringement is necessary for such acts to be seen as legitimate expressions of civil disobedience. The author of Piracy: The Intellectual Property Wars from Gutenberg to Gates, Adrian Johns, writes that

"information has become a key commodity in the globalized economy and that piracy today goes beyond the theft of intellectual property to affect core aspects of modern culture, science, technology, authorship, policing, politics, and the very foundations of economic and social order. [...] That is why the topic of piracy causes the anxiety that it so evidently does. [...] The pirates, in all too many cases, are not alienated proles. Nor do they represent some comfortingly distinct outside. They are us. (Johns 26)

On his personal blog, Mars explains how to become an autonomous online librarian by sharing books using network technologies to contribute to critical mass. Calibre, an open-source software, allows you to create an individual database for a book/PDF collection (Mars). Calibre semiautomatically collects metadata from online sources. Each individual collection can be shared in a few simple steps when connected to a LAN (local area network). The entire collection can also be made available to others over the internet (outside the LAN). This is a bit more complex, but easy to learn and use. These mechanisms — a database and some basic knowledge of how to use networking technologies — form the basis of contributing to systems like the Public Library/Memory of the World. Database software like Calibre, networking technologies and tutorials like Mars's, as well as the maintenance of the website itself, make it possible to become an autonomous amateur librarian: knowledge can be made freely available by the many for the many. A project like Public Library/Memory of the World creates a potential for decentralisation, bringing together materials and perspectives that are not already validated or authorised by the formalised environment of an institutional library (Kolb and Weinmayr 2), but allowing for "flexible, self-directed, social and free" and many "small", "non-standardised", and independent libraries and learning platforms, like those presented by Thorne (31). As of May 23, 2024, the library currently offers access to 158,819 books, available in PDF or EPUB format, maintained and offered by twenty-six autonomous librarians, that you could potentially contact in one way or another.

Learning with Syllabi: Becoming a "Subject Position"

While Public Library/Memory of the World is often referenced in discussions of the commons, open access, online piracy, and shadow libraries (Sollfrank, Stalder, and Niederberger), syllabus ⦚ Pirate Care can be situated in the political tradition of radical writing and publishing in a new media environment (Dean et al.). Alongside this tradition, the initiators Graziano, Mars, and Medak claim that the project is in fact a continuation of the shadow library and its particular ethics and is using pedagogy as an "entry point" (syllabus ⦚ Pirate Care 4). Inspired by "online syllabi generated within social justice movements" such as #FergusonSyllabus (2014), #BlkWomenSyllabus (2015), #SayHerNameSyllabus (2015), #StandingRockSyllabus (2016), or #BLMSyllabus (2015/2016) (Learning with Syllabus), syllabus ⦚ Pirate Care serves as a transnational research project involving activists, researchers, hackers, and artists concerned with the "crisis of care" and the criminalisation of solidarity in "neoliberal politics" (syllabus ⦚ Pirate Care 117). After an introduction to the syllabus and its content, summaries, reading lists, and resources from the introductory sessions "Situating Care", "The Crisis of Care and its Criminalisation", "Piracy and Civil Disobedience, Then and Now", as well as guidance for exercises, are provided. Each session/section is accompanied by an exhaustive list of references and resources, as well as links to access the resources. This is followed by more detailed insights into civic and artistic projects and activist practices such as "Sea Rescue as Care", "Housing Struggles", "Transhackfeminism", and "Hormones, Toxicity and Body Sovereignty", to name but a few. Regarding its specific pedagogies and "technopolitics", it explains that:

We want the syllabus to be ready for easy preservation and come integrated with a well-maintained and catalogued collection of learning materials. To achieve this, our syllabus is built from plaintext documents that are written in a very simple and human-readable Markdown markup language, rendered into a static HTML website that doesn’t require a resource-intensive and easily breakable database system, and which keeps its files on a git version control system that allows collaborative writing and easy forking to create new versions. Such a syllabus can be then equally hosted on an internet server and used/shared offline from a USB stick. (syllabus ⦚ Pirate Care 5)

In addition to the static website (built with Hugo), it is possible to generate a PDF of the entire syllabus with a single click (this feature is built into the website using Paged.js). Some of the topics are linked to a specific literature repository on the shadow library Public Library/Memory of the World. The curriculum lives on a publishing platform, Sandpoints, developed by Mars. Sandpoints enables collaborative writing, remixing, and maintenance of a catalogue of learning resources as "concrete proposals for learning" (syllabus ⦚ Pirate Care 4). The source code for the software is made available via GitHub, and all "original writing" within the syllabus is released "under CC0 1.0 Universal (CC0 1.0), Public Domain Dedication, No Copyright" and users are invited to use the material in any way (syllabus ⦚ Pirate Care 6). The arrangement of this specific form of "Pirate Care" — an open curriculum linked to a shadow library, built with free software, together with the call for collective action — produces and distributes activities and content that can be linked back to the specific form of solidarity and ethics that the project is concerned with.

The specific technopolitical pedagogies of the two projects discussed do indeed apply a critical theory when using the network form, thus allowing for a practical orientation (especially when engaging with techno-cultural formulations.) They do so by exploring the specific content-form relations of research practices and their tools themselves; by advocating for the implementation of care in the network form; and by applying methodologies for commoning for enabling transversal knowledge exchange. They do so while embracing the opportunities offered by network technologies, calling for "technologically-enabled care and solidarity networks" (syllabus ⦚ Pirate Care 2). These systems are in place to support the use of experimental web publishing tools. By distributing information outside dominant avenues, Public Library/Memory of the World and syllabus⦚ Pirate Care continue to challenge the "unusable politics" (transmediale) and "unjust laws" (Swartz) that continue to produce harmful environments, offering a reassessment of the inherently violent dynamics of the realities of Publishing (with a capital P) (Dean et al.), the circulation of information as a commodity, and imperialist logics of structural discrimination. As a model for commoning knowledge in the form of a technically informed care infrastructure, the project not only enables its users to engage with the syllabus and library as a curriculum, but also to build and maintain similar infrastructures. As an alternative publishing infrastructure, these projects continue to have an impact on politics, pedagogies, and governance and can serve as models to carefully institute. In their 2022 publication "Infrastructural Interactions: Survival, Resistance and Radical Care", the Institute for Technology in the Public Interest (TITiPI) explore how big tech continues to intervene in the public realm. Therefore, TITiPI asks: "How can we attend to these shifts collectively in order to demand public data infrastructures that can serve the greater public good?" (TITiPI 2022)

Projects such as Public Library/Memory of the World and syllabus ⦚ Pirate Care produce what Goriunova — with reference to the conceptual persona in Gilles Deleuze and Félix Guattari’s “What is Philosophy?” — has called a "subject position", one that is "abstracted from the work and structures of shadow libraries, repositories and platforms" and that operates in the world in relation to subjectivities (Deleuze and Guattari; Goriunova 43). Goriunova's subject position is one that is radically different from what Hall recalls when he speaks of new subjectivities being formed within universities. In relation to making and using and learning with a shadow library like Public Library/Memory of the World or a repository like syllabus ⦚ Pirate Care, Goriunova states:

They [the subject positions] are formed as points of view, conceptual positions that create a version of the world with its own system of values, maps of orientation and horizon of possibility. A conceptual congregation of actions, values, ideas, propositions creates a subject position that renders the project possible. Therefore, on the one hand, techno-cultural gestures, actions, structures create subject positions, and on the other, the projects themselves as cuts of the world are created from a point of view, from a subject position. This is neither techno-determinism, when technology defines subjects, nor an argument for an independence of the human, but for a mutual constitution of subjects and technology through techno-cultural formulations. (Goriunova 43)

When one actively engages with network technologies, shadow libraries, repositories, and independent learning platforms, a subject position is constantly abstracted and made manifest. I would add to that, when one actively engages with network technologies, shadow libraries, repositories, and independent learning platforms a "commoner position" is constantly abstracted and made manifest. Galloway uses the Greek "Furies" as a metaphor for the "operative divinity" in the anti-hermeneutic tradition of networks and calls for a "new model of reading [...] that is not hermeneutic in nature but instead based on cybernetic parsing, scanning, rearranging, filtering, and interpolating" (Galloway 290). The Furies, which occur above all when human justice and the law fail somewhere, are suddenly reminiscent of the figure of the pirate that disobeys "unjust legal" and "social rules" (Graziano, Mars 141). The question remains: How can pedagogy attune so that it can create commoner positions that are willing to take on the work of the furies and the pirates, the work of parsing, scanning, rearranging, filtering, and interpolating? Who owns and shares the content that composes our dialogues and worlds?

Works cited

Cadwalladr, Carole. "The Great British Brexit Robbery: How Our Democracy Was Hijacked." The Guardian, May 7, 2017. https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy. Accessed May 22, 2024.

Cusicanqui, Silvia Rivera. "Ch'ixinakax utxiwa: A Reflection on the Practices and Discourses of Decolonization." South Atlantic Quarterly (2012) 111 (1): 95–109.

Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities, 2003. https://openaccess.mpg.de/Berlin-Declaration. Accessed May 22, 2024.

Bennet, Halina, Olivia Bensimon, and Anna Betts, et al. "Where Protesters on U.S. Campuses Have Been Arrested or Detained." New York Times, https://www.nytimes.com/interactive/2024/us/pro-palestinian-college-protests-encampments.html. Accessed May 22, 2024.

Dean, Jodi, Sean Dockray, Alessandro Ludovico, Pauline van Mourik Broekman, Nicholas Thoburn, and Dimitry Vilensky. "Materialities of Independent Publishing: A Conversation with AAAAARG, Chto Delat?, I Cite, Mute, and Neural." New Formations 78 (2013): 157–78.

Deleuze, Gilles, and Félix Guattari. What is Philosophy?, translated by Hugh Tomlinson and Graham Burchell. Columbia University Press, 1994.

Galloway, Alexander R. "Networks." In Critical Terms for Media Studies, edited by W. J. T. Mitchell and Mark B. N. Hansen, 280–96. University of Chicago Press, 2010.

Goriunova, Olga. "Uploading Our Libraries: The Subjects of Art and Knowledge Commons." In Aesthetics of the Commons, edited by Cornelia Sollfrank, Felix Stalder, and Shusha Niederberger, 41–62. Diaphanes, 2021.

Graziano, Valeria, Marcell Mars, and Tomislav Medak. "Learning from #Syllabus." In State Machines: Reflections and Actions at the Edge of Digital Citizenship, Finance, and Art, edited by Yiannis Colakides, Marc Garrett, and Inte Gloerich, 115–28. Institute of Network Cultures, 2019.

———. "When Care Needs Piracy: The Case for Disobedience in Struggles Against Imperial Property Regimes." In Radical Sympathy, edited by Brandon LaBelle, 139–56. Errant Bodies Press, 2022.

Graziano, Valeria, Marcell Mars, and Tomislav Medak, eds. syllabus ⦚ Pirate Care. 2019. https://syllabus.pirate.care. Accessed May 22, 2024.

Hall, Gary. "Postdigital Politics." In Aesthetics of the Commons, edited by Cornelia Sollfrank, Felix Stalder, and Shusha Niederberger, 153–80. Diaphanes, 2021.

Johns, Adrian. Piracy: The Intellectual Property Wars from Gutenberg to Gates. University of Chicago Press, 2009.

Kolb, Lucie, and Eva Weinmayr. "Teaching the Radical Catalog — A Syllabus." Arbido: Dekolonialisierung von Archiven 1 (2024). https://arbido.ch/de/ausgaben-artikel/2024/dekolonialisierung-von-archiven-decolonisation-des-archives/teaching-the-radical-catalog-a-syllabus. Accessed May 22, 2024.

Lowenhaupt Tsing, Anna. "On Nonscalability: The Living World Is Not Amenable to Precision-Nested Scales." Common Knowledge, Vol. 18, Issue 3, Fall 2012, 505-524.

Mars, Marcell. "Let’s Share Books." Blog post. January 30, 2011. https://blog.ki.ber.kom.uni.st/lets-share-books. Accessed May 22, 2024.

———. "Public Library/Memory of the World: Access to Knowledge for Every Member of Society." 32C3, CCC Congress, 2015. https://media.ccc.de/v/32c3-7279-public_library_memory_of_the_world. Accessed May 22, 2024.

Mars, Marcell, and Tomislav Medak. "System of a Takedown: Control and De-commodification in the Circuits of Academic Publishing." In Archives, edited by Andrew Lison, Marcell Mars, Tomislav Medak, and Rick Prelinger, 47–68. Meson Press, 2019.

Memory of the World, eds. Guerrilla Open Access — Memory of the World. Post Office Press, Rope Press, and Memory of the World, 2018.

Reed, Patricia. "The End of a World and Its Pedagogies." Making & Breaking 2 (2021). https://makingandbreaking.org/article/the-end-of-a-world-and-its-pedagogies. Accessed May 22, 2024.

Rosenzweig-Ziff, Dan, Clara Ence Morse, Susan Svrluga, Drea Cornejo, Hannah Dormido, and Júlia Ledur. "Riot Police and Over 2,000 Arrests: A Look at 2 Weeks of Campus Protests." Washington Post, May 3, 2024. https://www.washingtonpost.com/nation/interactive/2024/university-antiwar-campus-protests-arrests-data/. Accessed May 22, 2024.

Sollfrank, Cornelia, Felix Stalder, and Shusha Niederberger, eds. Aesthetics of the Commons. Diaphanes, 2021.

Srnicek, Nick. Platform Capitalism. Polity Press, 2016.

Swartz, Aaron. "Guerilla Open Access Manifesto." July 2008. https://archive.org/details/GuerillaOpenAccessManifesto. Accessed May 22, 2024.

The Institute for Technology in the Public Interest, Helen V. Pritchard, and Femke Snelting, eds. Infrastructural Interactions: Survival, Resistance and Radical Care, 2022. http://titipi.org/pub/Infrastructural_Interactions.pdf. Last accessed May 22, 2024.

Thorne, Sam. School: A Recent History of Self-Organized Art Education. Sternberg Press, 2017.

transmediale 2024. https://transmediale.de/de/2024/sweetie. Accessed May 22, 2024.

Watts, Duncan. Small Worlds: The Dynamics of Networks between Order and Randomness. Princeton University Press, 2003.

Biography

Denise Helene Sumi (she/her) is a curator, editor, and researcher. She works as a doctoral researcher at the Peter Weibel Institute for Digital Cultures at the University of Applied Arts in Vienna and has been the coordinator of the Digital Solitude program at the international and interdisciplinary artist residency Akademie Schloss Solitude, Stuttgart, from 2019 to 2024. Her research focuses on the mediation of artistic experimental directions that establish and maintain technology-based relationships, lateral knowledge exchange, and collective approaches. Sumi was editor in chief of the Solitude Journal and is cofounder of the exhibition space Kevin Space, Vienna. Her writing and interviews have been published in springerin, Camera Austria, Spike Art Quarterly, Solitude Journal, Solitude Blog, and elsewhere.

Kendal Beynon

Zines and Computational Publishing Practices

Zines and Computational Publishing Practices

A Countercultural Primer

Abstract

This paper explores the parallels between historical zine culture and contemporary DIY computational publishing practices, highlighting their roles as countercultural movements within their own right. Both mediums, from zines of the 1990s to personal homepages and feminist servers, provide spaces for identity formation, community building, and resistance against mainstream societal norms. Drawing on Stephen Duncombe's insights into zine culture, this research examines how these practices embody democratic, communal ideals and act as a rebuttal to mass consumerism and dominant media structures. The paper argues that personal homepages and web rings serve as digital analogues to zines, fostering participatory and grassroots networks and underscores the importance of these DIY practices in redefining production, labour, and the role of the individual within cultural and societal contexts, advocating for a more inclusive and participatory digital landscape. Through an examination of both zines and their digital counterparts, this research reveals their shared ethos of authenticity, creativity, and resistance.

Introduction

In the contemporary sphere of machine-generated imagery, internet users seek a space to exist outside of the dominant society using the principles of do-it-yourself (DIY) ideology. Historically speaking, this phenomenon is hardly a novel movement, within the field of subcultural studies, we can see these acts of resistance through zine culture as early as the early 1950s. In Stephen Duncombe's seminal text, Notes from Underground: Zines and the Politics of Alternative Culture, he describes zines as "noncommercial, nonprofessional, small-circulation magazines which their creators produce, publish and distribute by themselves" (Duncombe 10-11). The content of these publications offers an insight into a radically democratic and communal ideal of a potential cultural and societal future. Zines are also an inherently political form of communication. Separating themselves from the mainstream, Elke Zobl states "the networks and communities that zinesters build among themselves are "undoubtedly political" and have “potential for political organization and intervention" (Zobl 6). These feminist approaches to zinemaking allow zinemakers to link their own lived experience to larger communal contexts politicising their ideals within a wider social context, allowing space for alternative narratives and futures. In the book mentioned earlier, in an updated afterword for 2017, Duncombe explicitly states: "One could plausibly argue that blogs are just ephemeral (...) zines".

Continuing this train of thought, this paper puts forward the argument that certain computational publishing practices act as a digital counterpoint and parallel to their physical peer, the zine. Within the context of this research, computational publishing practices refers to personal homepages and self-sustaining internet communities, and feminist server practices. The use of the term 'computational publishing' refers to practices of self-publishing both on a personal and collaborative level, while also taking into consideration the open-source nature of code repositories within feminist servers. Both mediums of a countercultural movement, zines and DIY computational publishing practices offer a space to explore the formation of identity, the construction of networks and communities and also aim to reexamine and reconfigure modes of production and the role of labour within these amateur practices. This paper aims to chart the similarities and connections that link the two practices and explore how they occupy the same fundamental space in opposition to dominant society.

Personalising Identity

Zines are commonly crafted by individuals from vastly diverse backgrounds, but one thing that links them all is their self-proclaimed title of 'losers'. In adopting this moniker, zinesters identify themselves in opposition to mainstream society. Disenfranchised from the prescribed representation offered in traditional media forms, zinemakers operate within the frame of alienation to establish self as an act of defiance. As Duncombe writes, zines are a "haven for misfits" (22). Often marginalised in society, feeling as if their power and control over dominant structures is non-existent, these publications offer an opportunity to make themselves visible and stake a claim in the world through their personal experiences and individual interpretations of the society around them. A particular genre of zine called the personal zine, more commonly known as the 'perzine', is a type of zine that outweighs the subjective over the objective and places the utmost importance on personal interpretation. In other words, perzines aim to express pure honesty on the part of the zinemaker. This often is shown through the rebellion against polished and perfect writing styles in favour of the vernacular and handwritten. The majority of the content of perzines aims to narrate the personal and the mundane, recounting everyday stories as an attempt to shed light on the unspoken. Perzines are often referred to as "the voice of democracy" (Duncombe 29), a way to illuminate difference while also sharing common experiences of those living outside of society, all within the comfort of their bedroom.

Figure 1: Examples of perzines (personal zines) around the topic of quarantine, highlighting publications that express personal lived experience. Photograph courtesy of the Wellcome Collection.

With personalisation remaining at the forefront of zine culture as a way to highlight individuality and otherness, the personalisation of political beliefs makes up a large majority of the content present within perzines. In a bid to "collapse the distance between the personal self and the political world" (Duncombe 36), zinemakers highlight the relation between the concept of the 'everyday loser' and the wider political climate they are situated within. As stated, the majority of zinemakers operate outside of mainstream society, so by inserting their own beliefs into the wider political space, they are allowing the political to become personal. This achieves a rebuttal towards dominant institutions through active alienation, revealing the individual interpretation of policies present and situating them in a highly personalised context. This practice of personalising the political also links to the deep yearning zinemakers have for searching for and establishing authenticity within their publications. Authenticity in this context is described as the "search to live without artifice, without hypocrisy" (Duncombe 37). The emphasis is placed upon unfettered reactions that cut through the contrivances of society. This can often be seen in misspelled words, furious scribbling and haphazard cut-outs with the idea of professionalism and perfection being disregarded in favour of a more enthusiastic and raw output. The co-editor of Orangutan Balls, a zine published in Staten Island, only known as Freedom, speaks of this practice of creation with Duncombe, "professionalism – with its attendant training, formulaic styles, and relationship to the market – gets in the way of freedom to just 'express'" (38). There is a deliberate dissent between the ideas of a constructed and packaged identity by the incorporation of nonsensical elements that seek to be seen as an act of pure expression.

Figure 2: Cameron's World, a collection of stickers, gifs and styles from the era of homemade homepages in a cut & paste style, reminiscent of the cut & paste nature of zinemaking. (https://www.cameronsworld.net/)

As stated in the introduction to this paper, personal webpages and blogs stand in as the digital equivalent of a perzine. In the mid 1990s, a user's homepage served as an introduction to the creator of the site, employing the personal as a tool to relate to their audience. The content of personal pages, not unlike zines, contained personal anecdotes and narrated individual experiences of their cultural situation from the margins of society. While zines adopted cut-and-paste images and text as their aesthetic style, websites demonstrated their vernacular language through the cut-and-pasting of sparkling gifs and cosmic imagery as a form of personalisation. "To be blunt, it was bright, rich, personal, slow and under construction. It was a web of sudden connections and personal links. Pages were built on the edge of tomorrow, full of hope for a faster connection and a more powerful computer." (Lialina and Espenschied 19) Websites were prone to break or contain missing links which cemented the amateurish approach of the site owner. The importance fell on challenging the very web architectures that were in place, pushing the protocols to the limit to test boundaries as an act of resistance. This resistance is mentioned again by Olia Lialina in a 2021 blog post where she articulates to users interested in reviving their personal homepage: "Don't see making your own web page as a nostalgia, don't participate in creating the 'netstalgia' trend. What you make is a statement, an act of emancipation. You make it to continue a 25-year-old tradition of liberation." These homepage expressions can also be seen as a far cry from the template-based web blogs such as Wordpress, Squarespace or Wix which currently dominate the more standardised approach to web publishing in our digital landscape.

Figure 3: Customisation options in Second Life. Highlights the high level of customisation afforded which goes beyond basic physical characteristics such as eye colour, hair colour, etc.

Zines also act as a method of escapism or experimentation. Within the confines of the publication, the writers can construct alternative realities in which new means of identity can be explored. Echoing the cut-and-paste nature of a zine, zinemakers collage fragments of cultural ephemera in a bid to build their sense of self, if only for the duration of the construction of the zine itself. These fragments propose the concept of the complexity of self, separate from the neatly catalogued packages prescribed by dominant ideals of contemporary society. Zines, instead, display these multiplicities as a way to connect with their audience, placing emphasis on the flexibility of identity as opposed to something fixed and marketable.

The search for self is prevalent among contemporary internet users, however, usually this takes the form of avatars or interest-based web forums. Avatars, in particular, allow the user to collage identifying features in order to effectively ‘build’ the body they feel most authentic within. Echoing back to the search for authenticity: "What makes their identity authentic is that they are the ones defining it" (Duncombe 45). Zinemakers aim to use zines as a mean to recreate themselves away from the strict confines of mainstream society and instead occupy an underground space in which this can be explored freely. Echoing this idea, the concept of a personal homepage is frequently embraced as a substitute for mainstream profile-based social media platforms prevalent in the dominant culture. Instead of conforming to a set of predetermined traits from a limited list of options, personal homepages offer the chance to redefine those parameters and begin anew, detached from the conventions of mainstream society.

In short, both zines and homepages become a space in which people can experiment with identity, subcultural ideas, and their relation to politics, to be shared amongst like-minded peers. This act of sharing creates fertile ground for a wider network of individuals with similar goals of reconstructing their own identities, while also encouraging the formulation of their own ideals from the shared consciousness of the community. While zines are the fruit of individuals disenfranchised from the wider mainstream society, they become a springboard into larger groups merging into a cohesive community space.

Building and Sharing the Network

Amongst the alienation felt from being underrepresented in the dominant societal structures, it comes as little surprise that zinemakers often opt for creating their own virtual communities via the zines they publish. In an interview with Duncombe, zinemaker Arielle Greenberg stated "People my age... feel very separate and kind of floating and adrift", this is often counteracted by integrating oneself within a zine community. Traditionally within zine culture, this takes the form of letters from readers to writers, and reviews of other zines that become the very fabric, or content, of the zine themselves. This allows the zine to transform into a collaborative space that hosts more than a singular voice, effectively invoking the feeling of community. This method of forming associations creates an alternative communication system, also through the practice of zine distribution itself. For example, one subgenre of zines is aptly named 'network zines', and their contents entirely comprise reviews of zines recommended by their readership.

This phenomenon is exemplified in the establishment of web rings online. Coined in 1994 by Denis Howe's EUROPa (Expanding Unidirectional Ring of Pages), the term 'web ring' refers to a navigational ring of related pages. While initially, this practice gained traction for search engine rankings, it evolved into a more social context during the mid-1990s. Personal homepages linked to the websites of friends or community members, fostering a network of interconnected sites. In a space in which zinemakers are in opposition to the dictated mode of media publishing, the web ring offers an alternative way of organising webpages as curated by an individual entity, devoid of hierarchy and innate power structures from an overarching corporation, and places the power of promotion into the hands of the site-builders themselves, and extends to the members of the wider community.

Figure 4: Factsheet Five, the most common network zine. This zine’s primary function is to share and review other zines as a way to showcase the network, not unlike a web ring.
Figure 5: A selection of early web rings, curated by the owner of the website to promote certain pages that may share similar themes, ideologies, etc.

The importance of community, or the more favoured term, network, within the zinemaking practice is held in high regard. Due to its non-geographical nature or sense of place, the zines themselves act as a non-spatial network in which to foster this community. Emulating this concept of the medium as the community space itself, online communities also tend to reside within the confines of the platforms or forums that they operate within. For example, many contemporary internet communities dwell in parts of preexisting mainstream social media platforms such as Discord or TikTok, however, their use of these platforms is a more alternative approach than the intended use prescribed by the developers. Primarily using the gaming platform Discord as an example, countercultural communities create servers in which to disseminate and share resources through building topics within the server to house how-to guides and collect useful links to help facilitate handmade approaches to computational publishing practices. The nature of the server is to promote exchange and to share opinions on a wide array of topics, ranging from politics to typographical elements. These servers are typically composed of amateur users rather than professional web designers, fostering an environment where the swapping of knowledge and skills is encouraged. This continuous exchange of information and expertise creates a common vernacular, a shared language, and a set of practices that are distinct to the community. This not only develops the knowledge made available but also strengthens the bonds within the community, as members rely on and support one another in their collective pursuit of a more democratised digital space. In an online social landscape in which the promotion of self remains at the forefront, this act of distribution of knowledge indicates the existence of a participatory culture as opposed to an individualistic one, all united in shared beliefs and goals.

As zinemakers often come from a place of disparity or identify as the other (Duncombe 41), it is precisely this relation of difference, that links these zine networks together, sharing both their originality but also their connectedness through shared ideals and values. Through this collaborative approach, "a true subculture is forming, one that crosses several boundaries" (Duncombe 56). This method of community helps propagate both individualities while simultaneously sharing the very amongst peers, simultaneously allowing their own content to gain the same treatment in the future.

The FOSS movement present in feminist servers also speaks to the ethics of open source software and the free movement of knowledge between users. FOSS, or Free Open Source Software champions transparency within publishing allowing users the freedom to not only access the code but also enact changes to it for their own use (Stallman 168). In Adele C Licona’s book, Zines in Third Space, she also acknowledges the use of bootlegged material: "The act of reproduction without permission is a tactic of interrupting the capitalist imperative for this knowledge to be produced and consumed only for the profit of the producer; it therefore serves to circulate knowledge to nonauthorized consumers" (128). This manner of making stems from a culture of discontent against dominant power structures that control what and how that media is published. Zines are an outlet to express this discontent under their own restrictions and method of reaching their readers while uniting with a wider network of publishers doing the same.

Figure 6: Pervasive Labour Union Zine 10: Immateriality by Lidia Pereira. This is an example of a zine that examines common labour practices as a way to dismantle dominant structures and offer alternatives.

Though zines exist on the fringes of society, their core concerns resonate universally throughout the zine network: defining individuality, fostering supportive communities, seeking meaningful lives, and creating something uniquely personal. Zines act as a medium for a coalitional network that breeds autonomy through making while actively encouraging the exchange of ideas and content. Dan Werle, editor of Manumission Zine states his motivations for zines as a medium for his ideas: "I can control who gets copies, where it goes, how much it costs, its a means of empowerment, a means of keeping things small and personal and personable and more intimate. The people who distribute my zines I can call and talk to... and I talk directly to them instead of having to go through a long chain of never-ending bourgeoisie" (Duncombe 106). With this idea of control firmly placed in the foreground for zinemakers, these zines' aesthetic frequently reflects this, with the hands of the maker is evident in the construction of the publication through handwriting and handmade creation. The importance of physically involving the creator in the process of making zines demonstrates the power the user has over the technological tools used to aid the process, closing the distance between producer and process. Far from only addressing the ethics of the DIY ideals abstractly, zines become the physical fruits of an intricate process, thus encouraging others to get involved and do the same.

Paralleling the materiality of the medium, feminist servers emphasise the technology needed to build and host a server. Using DIY tools such as microcontrollers and various modifications to showcase the inner workings of an active server, the temporal nature of the abstract server is revealed. In this, the vulnerabilities of the tool are also exposed, microcontrollers crash and overheat, de-fetishising the allure of a cloud-based structure, thus, once again bridging the gap between the producer and the process.

Figure 7: Image by Mara Karagianni. Diagram of processes and infrastructure at play within the feminist server Systerserver.

Additionally to the question of labour practices in the dominant society, the concept of mass consumption is of real concern to those involved in zine culture. In the era of late capitalism, society has swiftly shifted towards mass-market production, leading to a surge in consumerism. Once a lifestyle made solely available to the wealthy upper classes, mass production of everyday commodities has democratised consumption, making products widely available through extensive marketing to the masses. Historically, consumers felt a kinship with products due to their handmade quality, however, this has diminished in the current market substantially, instead fetishising the hordes of cheaply made objects under the guise of luxury.

Zines are an attempt to eliminate this distance between consumer and maker by rejecting this prescribed production model. Celebrating the amateur and handmade, zines reconnect the links between audience and media through de-fetishising elements of cultural production and revealing the process in which they came to be. Yet again, using alienation as a tool, zinemakers reject their participation in the dominant consumerist model and instead opt for active engagement in a participatory mode of making and consuming. The act of doing it yourself is a direct retaliation to how mainstream media practices are attempting to envelope its audiences, through arbitrary attempts at representation, "because the control over that images resides outside the hands of those being portrayed, the image remains fundamentally alien" (Duncombe 127).

This struggle for accurate representation speaks back to some of the concepts stated in the first part of this paper, namely, how identity is formed and defined. With big-tech corporations headed solely by cisgender white men (McCain), feminist servers aim to diversify the server through their representation of the non-dominant society by sharing knowledge freely. Information is made more widely accessible for marginalised groups beyond the gaze of the dominant power structure and for those with less stable internet connections usually overfed with the digital bloat that accompanies mainstream social platforms. By sharing in-depth how-to guides for setting up their own servers, these platforms not only democratise the internet but also empower individuals to build and sustain more grassroots communities. This accessibility encourages a shift away from reliance on mainstream big-tech corporations, fostering a more inclusive and participatory digital landscape. By enabling users to take control of their own online spaces, these servers promote autonomy, privacy, and a sense of collective ownership.

Zines, similarly, propose an alternative to consumerism in the form of emulation, by encouraging the participatory aspect of zine culture through knowledge sharing, and actively supporting what would usually be seen as a competitor in the dominant consumer culture, readers are encouraged to emulate what they read within their individual beliefs creating a collaborative and democratic culture of reciprocity. The very act of creating a zine and engaging in DIY culture generates a flow of fresh, independent thought that challenges mainstream consumerism. By producing affordable, photocopied pages affording everyday tools, zines counter the fetishistic archiving and exhibition practices of the high art world and the profit-driven motives of the commercial sector: "Recirculated goods reintroduce commodities into the production and consumption circuit, upsetting any notions that the act of buying as consuming implies the final moment in the circuit." (Licona 144) Additionally, by blurring the lines between producer and consumer, they challenge the dichotomy between active creator and passive spectator that remains at the forefront of mainstream society. As well as de-fetishising the form of a publication, they present their opposition and dissonance by actively rejecting the professional and seamless aesthetics that more commercial media objects tend to possess.

Figure 8: Photograph from Transmediale Content/Form Workshop 2024. It depicts the Raspberry Pi used to host the publishing infrastructure of a web2print publication created during the workshop.

This jarring nature of rough and ready against progressively homogenous visuals demands the audience's attention and reflects the disorganisation of the world rather than appeasing it. Commercial culture isolates one from reciprocal creativity through its black-boxing of the process, while zines initially employ alienation to later embrace one as a collaborative equal. The medium of zines and feminist servers isn't merely a message to be absorbed, but a suggestion of participatory cultural production and organisation to become actively engaged in.

Conclusion

From the connections outlined above, the exploration of DIY computational publishing practices reveals significant parallels to the formation of zine culture, both serving as mediums for personal expression, community building, and resistance against dominant societal norms. By examining personal homepages, web rings, and feminist servers, this paper demonstrates how these digital practices echo the democratic and grassroots ethos of traditional zines. These platforms not only offer individuals a space to construct and share their identities but also foster inclusive communities that challenge mainstream modes of production and consumption.

Web rings facilitate a space of digital collectivism that mirrors the collaborative spirit of zine networks, these interconnected websites create a web of shared interests and mutual support, reminiscent of the way zines often feature contributions from various authors and artists within a community. This networked approach not only enhances visibility for individual creators but also strengthens the sense of community by fostering a culture of sharing and collaboration. The decentralised nature of web rings contrasts with dominant structures in play of mainstream social media platforms, where algorithms dictate the visibility and reach of content, thereby reinforcing existing power dynamics and limiting the diversity of voices from marginalised groups.

A feminist server reveals the radical potential of DIY computational publishing by explicitly aligning technological practices with feminist and anti-patriarchal principles and offer a space for these communities to connect and share resources. This ethos of mutual aid and empowerment is deeply rooted in the DIY tradition, reflecting the zine culture’s emphasis on community-driven knowledge production and dissemination. By reclaiming the tools of production, individuals can subvert the power structures that typically regulate mainstream digital spaces, creating platforms that reflect their values and ideals instead. While these connections and similarities reveal themselves through their ethos and approach, it should be acknowledged that the computational counterpoints mentioned within this paper come with a steep learning curve in technical literacy and undeniable privilege to create these tools in the first place. Even within the zine community itself, the openness and accessibility of the internet still exposes a digital divide resulting in the majority of zines remaining as physical publications (Zobl 5).

In the contemporary landscape of increasingly AI-generated media, the homogenous nature of digital content can be offset by these DIY practices, which carve out alternative spaces that celebrate diversity and individuality. AI-driven algorithms tend to prioritise content that aligns with dominant trends and consumerist interests (Sarker 158). In contrast, DIY computational publishing practices facilitate the exploration of authenticity and creativity, offering a place for those seeking to express their identities and ideas outside the confines of mainstream digital culture, albeit, by learning a host of technical skills. Ultimately, the intersection of zines and digital DIY culture illustrates a broader movement towards reclaiming creative and communicative agency in an increasingly smooth and familiar digital landscape. By embracing the principles of DIY culture, one can create a digital ecosystem that celebrates individuality, fosters community, and by recognising and supporting these grassroots efforts there is the potential to maintain a democratic and participatory digital culture.

Works cited

Carr, C. "Bohemia Diaspora," Village Underground, 4 February 1992.

Carmona, C. "Keeping the Beat: The Practice of a Beat Movement", Texas A&M University, 2012.

Casey, C. "Web Rings: An Alternative to Search Engines", Vol 59, No 10 College & Research Libraries News, 1998, 761-763.

Duncombe, S. Notes from underground: Zines and the Politics of Alternative Culture. Microcosm Publishing, 2017.

Hebdige, D. Subculture: The Meaning of Style. Routledge, 1979.

Howe, D. "Expanding Unidirectional Ring Of Pages." Denis’s Europa Page, 22 Dec. 1994, foldoc.org/europa.html.

Karagianni, M, Wessalowski, N. "From Feminist Servers to Feminist Federation," ARPJA Vol. 12, Issue 1, 2023.

Lialina, O. "Olia Lialina: From My to Me." INTERFACECRITIQUE, 2021, https://interfacecritique.net/book/olia-lialina-from-my-to-me/.

Lialina, O., Espenschied, D. and Buerger, M. Digital Folklore: To computer users, with love and respect. Merz & Solitude, 2009.

Licona, A.C. Zines in Third Space: Radical Cooperation and Borderlands Rhetoric. SUNY Press, 2013.

McCain, A. "40 Telling Women In Technology Statistics [2023]: Computer Science Gender Ratio" Zippia.com. Oct. 31, 2022, https://www.zippia.com/advice/women-in-technology-statistics/.

Sarker, I.H. "AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems." SN Comput Sci. 3, 158, 2022. https://doi.org/10.1007/s42979-022-01043-x.

Stallman, R. "Floss and Foss - GNU Project - Free Software Foundation, [A GNU head]," 2013. https://www.gnu.org/philosophy/floss-and-foss.en.html (Accessed: 17 May 2024).

Stallman, R. Free as in Freedom (2.0): Richard Stallman and the Free Software Revolution, Free Software Foundation, 2010.

Steyerl, H. "Mean Images," New Left Review 140/141, March–June 2023, New Left Review. https://newleftreview.org/issues/ii140/articles/hito-steyerl-mean-images (Accessed: 31 October 2023).

Systerserver, https://systerserver.net/ (Accessed: 17 May 2024).

Zobl, E. "Cultural Production, Transnational Networking, and Critical Reflection in Feminist Zines." Signs: Journal of Women in Culture and Society, 2009. https://doi.org/40272280.

Biography

Kendal Beynon (UK) is a Rotterdam-based artist and PhD researcher at CSNI in partnership with The Photographers’ Gallery, London. Her work is situated in the realm of experimental publishing and internet culture. Completing a bachelor’s degree in the UK in Music Journalism in 2013, she went on to receive her MA degree in Experimental Publishing from the Piet Zwart Instituut, Rotterdam. She also is heavily engaged with the zine-making community by hosting workshops and co-organising the Rotterdam-based zine festival, Zine Camp, and creating a community of old web aficionados at Dead Web Club.

Bilyana Palankasova

Between the Archive and the Feed

Between the Archive and the Feed

Feminist Digital Art Practices and the Emergence of Content Value

Abstract

This article discusses feminist performance and internet art practices of the 21st century through the lens of Boris Groys’s theory of innovation. It analyses works by Signe Pierce, Molly Soda, and Maya Man, to position practices of self-documentation online in exchange with feminist art histories of performance and electronic media. The text proposes that the discussed contemporary art practices fulfil the process of innovation detailed by Groys through a process of re-valuation of values via an exchange between the everyday, trivial, and heterogenous realm of social media (‘the profane’) and the valorised realm of cultural memory (‘the archive’). Using digital ethnography and contextual analysis, framed by the theory of innovation, the text introduced ‘content value’ as a feature of contemporary art on the Internet. The article demonstrates how feminist internet art practices expand on cultural value through the realisation of a process of innovation via an intra-cultural exchange between the feed and the institution.

Introduction

This article considers feminist performance and internet art practices from the twenty-first century against Boris Groys’ theory of innovation, proposing that these practices fulfil the innovation process theorised by Groys through a re-valuation of values and intra-cultural exchange between online spaces (particularly social media), as profane, and the institution(s) of contemporary art, as the archive.

The article will discuss the practices of three multi-media artists who work with performance online, use self-documentation methods, and incorporate archival motifs in their work - Signe Pierce, Molly Soda, and Maya Man. Their practices will be framed as exemplifying artistic and cultural innovation following Groys’ conceptual framework of ‘the new’ by relying on intra-cultural exchange between feminist art histories and practices (the archive) and online content (the profane). The theoretical framework is applied to readings of the artworks constructed through digital ethnography.

Signe Pierce is a multi-media artist who uses her body, the camera, and the surrounding environments to produce performances, films, and digital images with a flashy, neon, LA-inspired ‘Instagram’ aesthetic. In her work she interrogates questions about gender, identity, sexuality, and reality in an increasingly digital world and identifies herself as a ‘Reality Artist’ ("Signe Pierce").

Moly Soda is a performance artist and a “girl on the internet” since early 2000s when as a teen she started blogging on Xanga and LiveJournal and in 2009 she started Molly Soda Tumblr (Virtual Studio Visit: Molly Soda). Her work explores the technological mediation of self-concept, contemporary feminism, cyberfeminism, mass media and popular social media culture.

Maya Man is an artist focused on contemporary identity culture on the Internet. Her websites, generative series, and installations examine dominant narratives around femininity, authenticity, and the performance of self online. In her practice, she mostly works with custom software and considers the computer screen as a space for intimacy and performance, examining the translation of selves from offline to online and vice versa.

Alongside a discussion of the documentary properties and archival themes in these digital practices, the article proposes content value as an artistic attribute, having emerged out of the exchange between the pervasiveness of networked technology (profane realm) and the established tradition of cultural archives.

Theory of Innovation

Innovation is a multifaceted term with different definitions and implications depending on discipline and context, whether it is cultural, artistic, scientific, or technological. Particularly throughout the twentieth, and especially in the twenty-first century, in a cultural context, innovation and the ‘new’ have been thought of as a method of resistance to the banality of commodity production (Lijster, The Future of the New: Artistic Innovation in Times of Social Acceleration 10). At the same time, with the accelerating pace of technology, innovation gained proximity to capitalist agendas and is overwhelmingly used to describe new products or services, often aligned with techno-solutionist approaches. Because of this prominence of technological innovation and its fundamental entanglement with capitalist production, innovation has become problematic in the context of contemporary art (11). In this context of inherent contradiction, can art still deliver social and cultural critique while being subjected to the steady pace of innovation? Philosopher of art Thijs Lijster identifies some of the more prominent aspects of innovation in relation to art and social critique: innovation’s capacity for critique, the distinctions between concepts and practices of innovation, the importance of innovation to artistic practice, and the capacity of innovation to be separated from acceleration (12). This article will consider innovation and the concept of the ‘new’ in its historical and institutional dimensions by using as an analytical framework Boris Groys’s theory of cultural innovation, which he developed in his book On the New. In line with this, the use of ‘innovation’ here refers to processes of cultural innovation which are interlinked with and produce processes of artistic innovation.

Overview of the Theory

In On the New, Groys suggests that contemporary culture is driven by the urge to innovate, and the process of innovation is closely entangled with the economic logic of culture. He claims that the idea of the new has changed and rather than meaning truth, utopia, or essence in cultural difference (which it did in modernist discourse), the new is defined by its positive and negative adaption to the traditional and established culture. Groys applies this principle to artworks in the sense that their value is determined by their relation to other artworks, or cultural archives, not to extra-cultural dimensions. At the same time, he suggests that there is a constantly shifting line of value between these cultural archives and the superfluous, profane realm of the everyday. Innovation occurs when an exchange is realised between the cultural archive and the profane. In this context, innovation means re-valuing what is already valued or established and cross-contaminating it with what is trivial, every day, and profane, so the new can emerge. Groys discusses his theory via examples from art, writing, and philosophy but the epitome of this process of innovation for him are Marcel Duchamp’s ready-mades.

In this theoretical framework, I propose a reading of contemporary feminist internet practices as exemplifying the process of cultural innovation as described by Groys. The incorporation of content strategies into contemporary performance practice on the Internet is framed as a process of exchange and re-valuation of values between the established tradition of feminist practice and the vulgarity of contemporary networked social media. In this context, I propose ‘content value’ as a feature of contemporary art, having emerged at the interface of critical creative practice, documentary performative practices online, and Web 2.0 social platforms.

The article is divided into three parts, each considering a segment of the theory of cultural innovation as developed by Groys to draw an analytical framework for thinking about artistic and cultural value at the intersection of art and content. The theory is presented in three subsequent parts, reflecting three key features of the process of innovation as described by Groys: "The Archive & The New", "The Value Boundary", and "Innovation as Re-evaluation of Values". Each theoretical part is paired with a discussion of the cultural and artistic conditions pertaining to the application of the theory to the subject of the article. In the first part, I discuss the cultural archives, or established art historical values, against which contemporary practices are valued. In the second part, I elaborate on how the value boundary is crossed by the use of content-as-document. In the third part, I suggest that ‘content value’ emerges as a feature of contemporary art in the last decade, as part of a process of cultural innovation based on exchange between institutionalised and valorized culture and networked social technologies.

The Archive & The New

On the New

On the New was first published in 1992 in German in the already mentioned context of increasing distrust towards innovation. Groys offers a theory which rejects the modernist implications of innovation, such as utopia, creativity, or authenticity, and instead positions innovation as a process of re-valuation of values. He wrote On the New against the debate of impossibility of new culture, theory, or politics and rather than supporting this impossibility for new culture to emerge, he positions the new as an outcome of the economic mechanisms of culture through a reinterpretation of older theories of innovation, to argue that the new is, in fact, inescapable. To understand or establish what the new might be, Groys stresses the importance of firstly dealing with the value of cultural works and where it comes from.

Groys posits that a work acquires value when it is modelled after a valuable cultural tradition - a process termed ‘positive adaptation.’ He opposes this process to the process of ‘negative adaptation’ – when a work is set in contrast to such traditional models (Groys 16). In this way, Groys positions cultural value as constructed by objects and practices’s relationship to tradition and established cultural and artistic norm; or their relationship to other objects and practices existing in certain cultural archives in a hierarchy of public institutions. These cultural archives are institutions such as libraries, museums, or universities, fulfilling a role of storing works in particular value hierarchy. In this context, the source of a cultural object’s value is always determined by its relationship to these archives: in the measure of “how successful its positive or negative adaptation is” (17). Or in other words, a cultural object’s value is based on its resonance or dissonance with ‘the archive’ where the use of archive stands for the institutional cultural spaces which collect, preserve, and disseminate cultural knowledge within an established hierarchy of values; the archive is the collective expression of institutionalised histories and practices in art and culture.

In this sense, Groys’s reading of the new and innovation is opposed to the modernist understanding of the new as utopian, true, or an extra cultural other[1] of the orienting mechanism of culture itself. This intra-cultural new is atemporal, it is not reliant on progress, it is grounded in the now and stands as much in opposition to the future, as it does to the past (41). According to him, the new is not an effect of original difference, is not the other, but emerges in a process of intra-cultural ‘revaluation of values’ - the new is always already a re-value, a re-interpretation, “a new contextualisation or decontextualization of cultural attitude or act” conforming to culture’s hidden economic laws (56–57). This new is achieved through the re-interpretation and re-contextualisation of existing values in a process of exchange between cultural domains. The revaluation of certain culturally archived values is the economic logic of a recurrent process of innovation.

To ground this in the article, I will discuss the cultural memory context, or the art historical background against which the works in questions are considered and valued. This part explains the relationship of the new to the cultural archives and elaborates the role of new technologies in a process of exchange with histories of feminist art.

Cultural Archives

The first part of Groys’s theory suggests that the source of a cultural object’s value is determined by its relationship to cultural archives and in the measure of how successful its positive or negative adaptation to these archives is. And through this adaptation and exchange, the process of re-valuation of values produces the new. In this section, I will establish the cultural archive, or art historical tradition, against which the practice of Pierce, Soda, and Man adapt to produce the new.

To determine what these practices’s relationships to established cultural and artistic norms are, we need to consider them against a tradition of practices existing in the cultural archives. Following Groys’s theory, the cultural value of ‘new’ feminist internet performance art is determined by its relationship to art historical context or lineage of similar work, in the measure of how successful the positive or negative adaptation to them is. The works of Pierce, Soda, and Man are underpinned by rich histories at the intersection of performance art, moving image, net art, and post-internet art dealing with gender, identity, and femininity.

The artistic traditions in question could be traced back to the 1970s, when Lynn Hershman Leeson started experimenting with fictional characters. Works like Roberta Breitmore (1973-1978) and Lorna (1979-1983) employed interaction and technology, such as TV and new video formats like LaserDisc, to critique the performance of gender (Harbison 70). Later, Hershman started making Internet-based projects, such as CyberRoberta (1996) (Harbison 71) to further examine the authenticity of self and the complications and anxiety emerging with the arrival of the Internet. Hershman’s The Electronic Diaries (1984-96) is a video series representative of a period in the 1980s and 1990s when artists were experimenting with cyberspace as a social space for self-representation and liberation (Harbison 7). In the videos, the artist confronts fears and traumas through recoded confessions, speaking directly to the camera. The tapes are fractured and use digital effects to reflect the psychological changes and misperceptions of self the protagonist experiences (Tromble 70). The series became a key work for the artist and for artists’ video more broadly as it restated and resonated with a lot of the feminist video themes of the preceding decade. At the same time, her work engaged with the new politics of representation that had emerged with the affordances of new communication and media technologies (Harbison 66).

Hershman’s contribution to what Rosalind Krauss at the time had termed ‘aesthetics of narcissism’ (Krauss) was suggesting that the separation of the subject and its fictional manifestations in the electronic mirror, need to be considered not just in the immediacy of the technological medium, but in the entire system and network of televisual environment (Tromble 145). Exploring early ideas around surveillance and confronting your image in real-time through new media, Hershman stresses the need for the mediated reflection to be understood not simply as a reflection of reality but as a force which actively manipulates and constructs reality. Thus, her work presents the screen as a space where identities are negotiated, constructed, and reconstructed. Through embedding the subject’s image within a larger television context, Hershman highlights the complexities of identity as mediated, fragmented, and entangled with fictional representations in the context of the pervasive influence of television.

Extending the thematic line of fictionalisation and trauma in another artistic context, mouchette.org (1996-ongoing) by Dutch artist Martine Neddam is an influential work of net art which takes the form of a personal website of a fictional character – a 13-year-old girl named Mouchette. The website takes the form of an interactive diary of a girl who shares thoughts on death, desire, and suicide. Neddam uses the characteristics of the web in ways which serve the story, such as confusing hyperlinks circulations, interactivity which layers the story, and constant identity play performed through a virtual character. The interactivity of the web and the nature of the work foster an active community of engaged audience which follows the work and could even use the website for their own projects (Dekker 152). Because of its expansive and pervasive character, the artwork has been a key case study in documenting and preserving net art since the project has generated documentation based on people’s experiences and memories, as well as documents of the site itself, making Mouchette a performative site and its own archive (Dekker and Giannachi 10). The interactivity and technological conditions of HTML also mean that the artwork facilitated these exchanges through anonymous interactions allowing the exploration of ambiguity and ethics in a way which today’s regulated online environments would impede.

Another key communication technology which was used in artistic performative experiments and became a transmitter of online female identities was the webcam. Ana Voog was the first artist to call her work ‘webcamming art,’ using her webcam as a tool to create Anacam (1997) – a twenty-four hours a day live broadcast of the artist’s home. Voog streamed daily activities, such as cooking, cleaning, having sex, chatting with cam-watchers, and hosting visitors. Alongside her vernacular, domestic activities, Voog also included performance art and visual experiments (Lehner 119). Blending performance and authenticity, Voog was one of the first artists to engage in continuous webcam streams and interacting directly with her audiences. Her use of technologies like a webcam and the Internet created art which was personal and had a new kind of immediacy for the audience.

A decade later, Petra Cortright took a different approach to webcam art with VVEBCAM (2007). The artis recorded herself staring into her webcam while playing with the various visual effects of her 20 USD webcam, including overlays of animated pizza slices, cats, and snowflakes (Lehner 120). The video was uploaded on YouTube and marked a significant departure from the typical camgirl genre where rather than addressing the camera and audience directly and engaging with erotic themes, Cortright documented herself as an immersed user, interacting with her computer ("Net Art Anthology"). In the video, Cortright appears to not wear makeup and is dressed casually, seemingly purposefully ‘unsexy’ (Lehner 120). When the video was uploaded to YouTube, she added tags, which rank higher in search engines and attract users looking for explicit content, like “tits vagina sex nude boobs britney spears paris hilton” (Soulellis 428). VVEBCAM took place on the cusp on a massive transformation in the way online users engage with posts, as soon after, the feed emerged – Facebook’s newsfeed ‘The Wall’ was introduced in 2006 and the iPhone was launched in 2007 (Soulellis 428). At the same time, artists started experimenting with posting on surf clubs and other blogs as a method to draw attention to the artistic value of user content (Moss 149).

Posting originally evolved into a metaphor for an online condition in the early days of the Internet but especially with the expansion of networked activity in the 1980s, and was a term which transitioned from newspaper bulletin boards and drew on wider traditions of announcement by and within a community (Soulellis 425–26). Relatedly, the origins of ‘content’ can also be traced to publishing metaphors and related to the visual or textual material in books, magazines, and newspapers. At the turn of the century, open and participatory web and digital media tools, like blogging, and particularly in 2005 with the launch of YouTube, brough fundamental changes in the media and information environment, in the way that audiences responded to culture online (Burgess 60).

In this context, another key moment for creating performance and video art for the new Web 2.0 platforms was Ann Hirsch’s The Scandalishious Project (2008-2009). For eighteen months, Hirsch’s fictional character Caroline Benton, a self-described camwhore, hipster, and freshman college art student in upstate New York, uploaded over a hundred monologic, blurry, low-resolution webcam videos to her YouTube channel Scandalishious (Steinberg 47). In the series, the artist danced in front of her webcam, vlogged, and engaged with followers, which responded to and commented on the videos ("Net Art Anthology"). The videos are satirical and humorous and play on stereotypes about sexiness and heterosexual desire (Brodsky 91–104). Scandalishious is a very early example of social media-based work and in the performance, Hirsch addresses the subject of girls online via an exploration of self-representation as feminist practice ("Net Art Anthology").

The artworks discussed here inform the context in which Pierce, Soda, and Man work and form the cultural archives against which the practices of the three artists are valued. They represent a lineage of works using performative and documentary strategies (similar to previous generation of feminist practice) and new technology – such as TV, LaserDisc, HTML, webcams etc. This illustrates a trajectory of feminist performance artists engaging with multimedia technologies, Web 1.0, and later Web 2.0. The arrival of Internet platforms for text and visual media, such as blogs, surf clubs, and later YouTube and others introduced online posting as an artistic method which also drew on the community fostering aspects of the digital platforms. Concerned with girlhood, femininity, hyperreality, cyberfeminism and subjectivity of identity, these artworks and practices use technology and documentary strategies to perform, curate, collect and archive selves via new media and digital technologies. The work of Pierce, Soda, and Man positively adapts to the cultural archive and steps on these histories and traditions to perform a process of innovation where new practices emerge through revisiting the established critical value of the art historical examples. At the same time, it negatively adapts to the archives by engaging them in a process of exchange with the vulgarity of contemporary networked social media.

This is not to say that the argument of the text is that the contemporary works are innovative compared to their art historical predecessors in a way which detracts from tradition. Rather, following Groys, innovation here is regarded as an ever-repeating cultural process which ultimately adjusts established cultural values to repeatedly incorporate the transitory everyday, to produce cultural and artistic newness. The practices of Pierce, Soda, and Man are interrogated as representative of the period 2014 – 2024 when the ubiquity of networked communication technologies and user-generated content (along with the existing traditions of feminist performance using technology described above) produce the conditions for the emergence of ‘content value’ as a feature of contemporary art.

Value Boundary

On the New

Foundational principle of how cultural archives function is that they embrace the new and reject the derivative - what’s new is understood as different, while also being as valuable as the old. At the same time, “organised cultural memory rejects as superfluous and redundant everything that merely reproduces what already exists” - Groys identifies this domain which comprises all things that are not included in the archives, as ‘the profane realm’ (64). He suggests that there is a constantly shifting line of value that separates ‘the archives’ from ‘the profane realm,’ where ‘profane’ consists of that which is perceived to be valueless, extra-cultural, and transitory. Within this conceptual framework, something becomes new when it moves from the profane realm to the archives, as the profane, by virtue of its heterogeneity, becomes “a reservoir for potentially new cultural values” (64). The proposition of this article is that content (or user-generated content), as a dimension of artistic work, has realised this move from the profane realm of pervasive social media to the archives (or to institutionalised culture).

To ground his theory, Groys uses Duchamp’s ready-mades as an example and discusses his work L.H.O.O.G (1919) as well as The Fountain (1917) to compare profane things with cultural values. In 1919, Duchamp drew a moustache and a goatee on a postcard depicting Mona Lisa. The title is a sort of word play and eliding the words in French sounds like “Elle a chaud au cul,” or “There is fire down bellow” (Duchamp). Duchamp’s interpretation of Mona Lisa is a “mutilated reproduction, which is basically a piece of trash” (65) and by confronting da Vinci’s work with its derivative, Duchamp exposes them as two different visual forms, suggesting there is no essential criteria to distinguish them based on their value. And if the “piece of trash” is as beautiful as the Mona Lisa, then [we should] “consider every hierarchising value distinction between the two images to be an ideological fiction designed to justify the domination of certain institutions of cultural power” (66). The newness in this comparison emerges in the act of value juxtaposition of two things which are usually assigned different values. The comparison does not eradicate the value hierarchy but means that “the trashy reproduction, regarded as a new object, gains access to the system for the preservation of culture” (66). As a result of the comparison, the reproduction is valorised and gains cultural value, since it presents itself as the other, or the profane, while at the same time due to a certain critical analysis, is also similar to existing cultural values. This valorisation, however, does not affect the fundamental distinction between the archive and the profane; the fact that a comparison’s been made across the value boundary doesn’t eradicate the boundary, it just modifies it (66).

To ground this part of the theory, in the next section I consider networked social platforms as the profane to discuss the shifting line of value between the archive and the profane in the works of Signe Pierce and Molly Soda.

Content crosses the value boundary

Performative practices on the Internet, and particularly on social networking platforms, are simultaneously performance, documentation, and content. Content aesthetics and behaviours become a feature of such online practices by occupying an artistic, documentary, and public social space. Between the archive and the feed, the content characteristics of these practices cross the value boundary between the valorised and the profane, often via photographic documentation. With the novelty of networked social technologies long gone, subscribing to the behaviours and politics of the feed is the epitome of the everyday, transitory, heterogenous, profane realm of contemporary life. If Hirsch and Cortright posted on YouTube as a practice of experimenting with new networked technologies, a decade or so later social media is fully integrated into online performative practice. At the same time, posting as part of practice borrows qualities and behaviours from the archive itself, such as indexing, tagging, collecting, preserving, and curating. Through these behaviours and conditions, content as part of art, shifts the value boundary between the archive and the profane.

While there are examples of early artistic engagement with social media, some of which were discussed in the previous chapter, the foundational artwork for performative practice on social media is Amalia Ulman’s Excellences & Perfections (2014). Ulman performed a fictional makeover on Facebook and Instagram, in which for several months, she did a scripted online performance. Using her social media profiles, Ulman performed various makeovers and lifestyle fantasies, including a breast augmentation, strict Zao Dha Diet, and regular pole-dancing lessons. Packaged in the form of content, the artist used various sets, props, and locations to critique consumerist fantasies by succumbing to what social media demanded of her to be – a ‘hot babe’ ("First Look"). Excellences & Perfections is the work indicative of the start of the period in question in this text and a prime example of Groys’s theory in the way it introduces content aesthetics into a critical artistic context.

Signe Pierce particularly draws on content and Instagram aesthetics and dynamics in her work to reflect on the immediacy and consumption associated with social media content, as well as the urge to capture, record, and document. Two examples are When You Die, Your Camera Roll Flashes before Your Eyes (2019) – an expedited infinite scroll-through of the artist’s iPhone photo library and Digital Streams of an Uploadable Consciousness: Stories 2016-2019 - a 20-minute-long amalgamation of the artist’s Instagram stories and Snapchats from the three-year period. Pierce describes these works as “things that I’ve uploaded to the Internet to be consumed by people” (Bucknell and Pierce). They are not chronological, nor cohesive and constitute a live growing archive – the artist was interested in building an archive of digital streams that quantifies her reality as a purging process in which she exports herself before reaching the next stage of her work (Bucknell and Pierce). The work emphasises the transient nature of digital content while at the same time it is highlighting the impact of these digital narratives on self-representation. Through the documentation and archiving of ephemeral Instagram stories, Pierce suggests that content accrues value over time as part of a digital archive. The works are a heterogenous stream of self-documentation consisting of purely aesthetic or representation content alongside critique of techno-capitalism and data rights violations. Both these works reflect Pierce’s examination of “art world hierarchies and the currency of digital content” (Signe Pierce). Content is both the subject and the method of the work - the framing of digital content as currency lends itself useful in thinking about content as crossing a value boundary and being elevated to art status. Pierce frames everyday digital artefacts like Instagram stories and camera roll images as significant cultural objects, blurring the line between the archive and the profane.

Through her interest in art world hierarchies, Pierce reflects on the exclusivity and inaccessibility of the art world and the capacity for the Internet and social media to reach beyond the art world. This is particularly evident through one of her most accomplished works - American Reflexxx (2015). An unscripted video footage in collaboration with director Alli Coates, in the short film the artist is walking down Myrtle Beach Boardwalk wearing a mirrored mask and an electric blue mini-dress. It was uploaded on YouTube in 2015 and immediately went viral reaching over 2 million views in its first week. The project represented a filming of “the cyborg walking around” and while Pierce walks down the Boardwalk performing, she gets subjected to various forms of mistreatment, ridicule, and abuse, which the authors did not anticipate. One passer-by shouts “It’s just pretentious performance art” (Figure 1) and in an interview for Mousse Magazine, the artists shares this was the only instance for the hour-long duration of the performance that somebody mentioned art (Bucknell and Pierce). At the same time, people could constantly be seen filming her and one is heard saying, “I’m putting this on Instagram” (Figure 2). In juxtaposing these two perceptions by Pierce’s audience – one of pretentious performance art, and one of content worthy of virality – we could conceptualise of the shifting line of value between those two domains.

Figure 1: Signe Pirece, American Reflexxx (2015) screenshot
Figure 2: Signe Pirece, American Reflexxx (2015) screenshot

Instagram aesthetics and social medial logics underpin Pierce’s wider practice and become a critical dimension of her ‘Reality Artist’ persona. Alongside conceptual works, Pierce also creates hyper-saturated photographs of Los Angeles palms and neon strip malls which have a distinct visual language drawing on hyperreality, LA aesthetics, and ‘trash culture’ and evocative of vaporwave with its play on consumption, nostalgia and self-referentiality. For instance, in an Instagram post from 2018, the artist shares a cyborgian image of herself taking a selfie with a selfie stick and surrounded by a green light halo. In the caption, she dwells on the vulgarity of being perceived photographing yourself and the alienation of performing for the machine (Figure 3). Adoring fans have filled the comment section with pledges of love and appreciation of the artist’s creative genius not unlike a celebrity fan club. Content is key feature of the artist’s aesthetic and posting has become part of practice, blurring the line between production, presentation, distribution, and consumption. Networked media has introduced new mechanisms of production and spectatorship, and new forms of value. This is not a condition which compromises the validity of the aesthetic experience provided by the museum, but one which states the contribution of networked platforms to processes of cultural value. The established cultural values of femininity, performance and hyperreality at the critical junction of documentation and digital distribution, are taken out of the institutional archive and into the profane realm of the transitory feed. Here, the documentary, and particularly self-documentation, become the vehicle for the shifting line of value between the archive and the feed.

Figure 3: Signe Pierce Instagram post from July 6, 2018.

Molly Soda’s practice includes video performances, social media posts and gallery installations and her work exists on platforms such as Tumblr, YouTube, and Instagram. In her work, she documents and explores processes of constructing, surveilling, and documenting herself. Over time, she became increasingly fascinated with how people construct and perform identities online as part of an evolving culture of trends, codes, and communities with their own vernacular. The artist performs herself as a character within the space of her home and her bedroom has become a widely recognised iconic space on the Internet. Through Soda’s perspective, the Internet is an aspirational space and an embrace of the multiplicities of character you could explore through it, especially in the very millennial way of being confessional online and the self-consciousness and anticipation of other people’s perception of you (Virtual Studio Visit: Molly Soda).

Me Singing Stay by Rihanna (2018) is a key work, which came out of the artist’s love for girls singing alone in their rooms. The artist was obsessed with the song ‘Stay’ by Rihanna and started compiling a playlist of YouTube videos of girls singing it. Eventually, the artist created a choir of 42 videos with a recording of herself in the centre, also singing the song (Soda). In her work, Soda extensively draws on documentary methods, in line with artist previously discussed in the article. Here, she expands and reverses the documentary space of performance by inserting herself in a collection of digital artefacts. Drawing on the Internet’s culture of vulnerability and immediacy, the artist grounds the work in intimate, emotional, bedroom performances, by curating a collection of digital artefacts out of the heterogenous chaos of the platform. This way, the work positively adapts to a tradition of self-documentation and technological intimacy, while also negatively adapting to such tradition by challenging the hierarchies of cultural archives through appropriating their methods. The line of value between the archive and the profane, or the artwork and content, is also challenged and set in motion by the archival behaviours of collecting and curating, performed by the artist in a transient online space. These methods of collecting content could also be observed in the work Me and my Gurls (2018) consisting of the artist dancing alongside animated GIF gurls joining her in the video, each trying to look sexier than the previous one. Performative artistic practices online are often underpinned by the creation of self-images which are framed as empowering and have become a sort of vernacular photographic practice which embraces “conventions of posturing the self to rehearse certain cultural stereotypes” (Proulx 115). Soda “overidentifies” with the image of the self-empowered, hyper-feminine bedroom camgirl” and through selfies, GIF blog posts, glittery and pink clichéd camgirl imagery depicts a subversive feminine image of unshaven and menstruating body (Proulx 115–16). In a sense, she critiques the mainstream media representations of women by propagating subversive images in the everyday realm where they thrive the most – on social media. This section discussed examples from Pierce and Soda’s practices to illustrate the ways in which the value boundary between art and content shifts through the artists’ use of documentary, archival, collection and curation methods. Applying these approaches to the heterogenous nature of online platforms, they realise a mobility of values in exchange between cultural tradition and Internet vernacular, which produce the conditions for the emergence of ‘content value’ as a key feature of artwork produced on the Internet.

Innovation as Re-valuation of Values

On the New

Crucially, Groys theorises of innovation as an exchange – the hierarchy of values held by the archive is reorganised by a cultural-economic form of exchange “between the profane realm and the valorised cultural memory” (139). In this context of both subscribing to and challenging institutionalised cultural value, Groys suggests that cultural innovation is a process realised by a strategic synthesis of positive and negative adaptation to the valorised cultural tradition because the new still exists and defines itself against the old (107–08). The result of this process is that things in the profane realm become valorised and enter the cultural archive, while other cultural works are devalorized and enter the profane realm. Importantly, this is not to say that if this process of innovation devalorizes certain cultural values, it also detracts from them – to reference the previously discussed example, da Vinci’s Mona Lisa is just as admired after Duchamp, as it was before (Groys 73).

Here, innovation constitutes an egalitarian gesture establishing an equalising moment between the valorised culture (or the archive) and the profane realm. However, valorised culture inherently assigns importance to this gesture and as a valorised realm, it is only seemingly criticized by it. Instead, every such process of innovation fulfils the cultural-economic mechanism and contributes to the expansion of both valorised cultural memory and the hierarchy of institutions which ensure its functioning. In a sense, this innovation process reinforces and maintains the power of the established cultural archive, as it is always grounded in the re-valuation of values, recalibrating. Because of this, it is futile to try an answer a question about the meaning of innovation, as this is a question about innovation’s relationship to extra-cultural reality. What’s relevant to culture is not the meaning of innovation, but the value which drives the process of innovation. Or in Groys’ words, “For culture as a whole, in any event, all that matters in each individual instance is that the value boundary separating cultural memory from the profane realm was successfully crossed and that an innovation occurred as a result” (Groys 74–75).

The Content Value of Art

In this theoretical context and innovation framework, the final part of this article extends further the proposal of ‘content value’ as underpinning the exchange between the archive and the profane. Building on the examples from the cultural archive, and the contemporary works discussed, the article considers other examples of Soda’s work alongside the work of Maya Man as an example of the emancipation of content-as-art from the documentary.

making an iced coffee (2023) is a YouTube video performance of Soda preparing a huge iced coffee using large amounts of incredients, incluidng syrup, milk, variou creams, and candied cherries, while wearing a pink bikini top. The silent deadpan video immediately invokes Martha Rosler’s Semiotics of the Kitchen (1975) and in a similar way subverts common stereotypes and perception of women, particularly via the trope of cooking videos and TV housewifes. The work could be interpreted as positively adapting to the cultural archive via the comparison to Rosler, while it negatively adapts to it via its presentation as content. Alongside its video form on YouTube, the work also exists as 12 stills grid image on Instagram (Figure 4). Through excess and indulgance, the work suggests consumerist fantasies with the artist at the centre, drawing on the trope of the hot girl online. Satirising the ubiquity of ‘how-to’ videos, the work simultaneously critiques the commodification of femininity and the proliferation of content.

Figure 4: Molly Soda, how to make an iced coffee (2023), Instagram post, 12 May 2023

Consumerism fantasies, wealth, and wellness trends are recurring themes in Soda’s practice, which she also explores in works such as it just smells like literally like you're sitting on the beach drinking a margarita and you're loving your life and you're super rich and like you own a yacht (2020) and the subsequent My Candle Collection (2021), which use scented candles. The artist has reflected on these suggesting “sometimes I think my work is about shopping” (Virtual Studio Visit: Molly Soda). This statement could also be supported by the artist’s consistent engagement with and content about food-related household objects and places, like the kitchen, or the pantry (Figure 5). What’s in my pantry (2023) suggests a parallel with the ‘What’s in My Bag’ trope in feminine lifestyle content, to reverse it and instead curate an alphabetical collection of ingredients and spices, found in the artist’s pantry. While we could read this through the lens of shopping, it is also another instance of the artist adopting archival behaviours – cataloguing, indexing, collecting, curating, presenting etc. This reading is evocative of the duality of the archive[2] and the supermarket, which Groys suggests in an interview, discussing On the New. He describes the supermarket and the museum as the two models in our civilisation, extending a similar argument he does in On the New, where ‘supermarket’ is framed as transient and focused on the now, whereas ‘the museum’ allows for comparison, because it preserves the old (Lijster, "The Future of the New: An Interview with Boris Groys"). In a sense, What’s in my pantry exemplifies this oscillation between the supermarket and the museum, or the profane and the archive. Content here emphasises the inclination and capacity of networked social platforms to imitate the archive by replicating its values – to save, collect, curate. Content crosses the value boundary between the two domains in the context of posting-as-practice and content-as-art, to point to the emergence of ‘content value’ as a feature of contemporary digital practice.

Figure 5: Molly Soda, What’s in my pantry (2023), Instagram post, 24 March, 2023.

In this framework of art-as-content and content-as-art, I’d like to consider the practice of Maya Man, whose work is often performative and text-based and exists at the boundary between art and content, while engaging with themes of girlhood through an online lens. In July 2022, Man was featured on Instagram’s Instagram (Figure 6).

Figure 6: Maya Man featured on Instagram’s Instagram, 11 July 2022.

In online performance, there’s often intentional confusion of the identity of the artist and the artwork itself. “The asynchronicity of social media forces us to watch ourselves. Logging on, we are confronted with versions of us that we have broadcast in bits and pieces of imagery, video, and quippy snippets of text. As social media platforms began infiltrating into our everyday lives in the late 2000s to mid-2010s, artists pushed things one step further, forcing an audience to watch them watch themselves” (Man, "The Artist Is Online"). Akin to the work of Pierce and Soda, Man’s permeates the space of both content and art while dwelling on issues of self-surveillance, self-documentation, and self-archiving via the performance of an anti-authentic versions of self. The artist complicates the idea of ‘a real self’ or ‘be yourself’ by embracing almost anti-curatorial approaches to documenting her selves online. With an emphasis on chance or randomness online, Man looks to archive the mundane but also intimate vernacular of the desktop and the user looking at it. This is the principal achievement of the generative browser extension work Glance Back (2018) – a daily photo diary, capturing the moments shared between you and your computer (Figure 7). Once a day at random when you open a new tab, Glance Back will quickly snap a picture of the user and prompt you to label it by answering the question ‘What are you thinking about?’ Once answered, the photo will be saved creating an archive of moments shared between you and your screen. Here, the practice of self-documentation has moved on from experiments with emerging technology to rather focus on the banality and day-to-day intimacy between us and our computers in the mundane ordinary online motions and interactions we engage in.

Figure 7: Maya Man, Glance Back (2018), Instagram post, 20 July 2019.

Man’s work is deeply concerned with self-representation, self-surveillance, documentation, reflection, archive, collection (Figure 8). Online, the boundary between art and content is obviously blurred and posting is a form of self-actualisation (Johnston and Man). Glance Back is about the artist’s relationship with her computer, but also how with time it has become an archive of herself of small moments that she wouldn’t otherwise document. “Usually I’m so plugged into the portal of my desktop that I can’t consciously conjure whatever I’m thinking about, but it’s really nice to have an interruption that forces me to archive it” (Johnston and Man). Here, we’re faced with a confrontation between the logics of the archive and the platform – the documentation of random moments is a symptom of the different logic of archiving online, and therefore of the different value orientation in an online environment of continuous and pervasive content generation.

Figure 8: Maya Man, Glance Back Instagram post, moments in browser software 4 self documentation reflection archive collection, 14 July 2023.

In Man’s work, content has fully assumed its value and power. The documentary strategies of self-recording and self-surveillance are only one method, as the photographic document is no longer the key to transcend the profane into the cultural archives. In earlier works discussed, photographic documentation was the vehicle crossing the value boundary and positively adapting the new to the art historical tradition before it. In this later stage, content in artistic context is emancipated from the documentary to surface as an indicator of value.

FAKE IT TILL YOU MAKE IT (2022) (Figure 9) is a generative art collection and later a book. The artists borrowed from the bubbly language and pastel-coloured aesthetics of Instagram text graphics to scrutinize the promotion of wellness, self-care, and confidence on social media. Every image featured in the book was generated with a custom, JavaScript-based algorithm, written by the artist. The book acts as an ode to the Art Blocks Curated collection, showcasing all 700 editions in glossy detail along with essays, Discord logs, poetry written with the output, source code, feature analysis, and a carefully curated selection of large-format spreads. Here, content has transcended the photographic and the documentary as an access point to the cultural archive, and the work draws on direct aesthetics and strategies of content, while exploring the ubiquity of self-care, motivational quotes, and positive affirmations online. The work was longlisted for the Lumen Prize in generative art.

Figure 9: FAKE IT TILL YOU MAKE IT

Online, these types of posts make their algorithmic way through networked feeds fuelled by attention and engagement through likes, comments, and shares. “’What do I believe?’ becomes ‘What do I want to appear to believe?’ Fake it till you make it! Maybe your dream life lives here: In a digital, fantasy world, where the algorithm plays god and loving yourself feels like looking into the light of your screen” (Man, "FAKE IT TILL YOU MAKE IT"). Man suggests that rather than being seen as an auxiliary act, posting could be reframed as an experimental practice in itself - “If the medium is the message, the ‘new media’ most artists are experimenting with today is the online presentation of self” (Man, "The Artist Is Online").

Man’s 2023 piece Dress Code exemplifies greatly the re-evaluation of values and synthesis of positive and negative adaptation to valorised cultural tradition via content. The generative patchwork piece uses language sourced from Gucci’s Instagram captions from the past 12 years, including over 200 adjectives from the fashion house’s social media (Figure 10). Pulling from this archive of words and a curated set of Unicode symbols, the program renders a randomly chosen element repeatedly in each patch, mimicking a method used for printed fabric. These signifiers combine in one’s wardrobe to perform their shifting identity. The work examines how platforms influence fashion, the entanglement of fashion brands with social media platforms, and the capacity of content, as a source and a logic, to scrutinize the processes it serves.

Figure 10: Maya Man, Dress Code (2023), Instagram post, 24 July 2023.

Conclusion

This article considered contemporary feminist performance and internet art practices in the framework of Boris Groys’s theory of innovation. By analysing works by Signe Pierce, Molly Soda, and Maya Man as exemplifying the process of cultural innovation theorised by Groys, the article demonstrated how these artists use social media platforms, self-documentation, and archival behaviours to create works which cross the boundary between the profane realm of networked social platforms and the archival realm of institutionalised cultural memory.

Through the lens of Groys’s theory, innovation in contemporary feminist internet art allows the emergence of ‘content value’ through the re-valuation of values. This process is realised through positive adaptation to the established art historical traditions at the intersection of feminist performance and electronic technology, and negative adaptation where the transitory and everyday space of the feed is used to expand on these traditions via the use of content as artistic method.

‘Content value’ emerges as a critical feature of contemporary digital art, suggesting a blurring of the line between art and content, and introducing a new type of value in cultural production, drawing on the heterogeneity of online space. ‘Content value’ is shaped as part of a process of cultural innovation founded in the exchange between the valorised cultural archives and the vulgar networked social technologies. The practices discussed use content as a key feature of practice, while at the same time they extensively draw on archival logics and behaviours.

This article proposed ‘content value’ as a vehicle of the process of innovation in digital feminist performative practice. The introduction of a new value doesn’t necessarily effectively challenge the traditional hierarchical structures of art but speaks to the emergence of new facets of contemporary art in the context of ubiquitous social networking. Further consideration of these processes needs to address the deeper influence of digital platforms and algorithmic politics, and to consider ‘content value’ in a broader artistic context.

Content-as-art and ‘content value’ speak to the synthesis of positive and negative adaptation of contemporary works to the archive and the ways in which they introduce new values into culture via the vernacular of the Internet. At the same time, ‘content value’ reflects the exchange between the archive and the feed more broadly – while social media imitates the behaviours of the archive, the archive, or the institution of art, continually adopts the logics, metrics, and values of social media. Ultimately, the archive as the structural condition of innovation is preserved via its capacity for transformation and adoption of the new.


Notes

  1. The concept of the new as an extra-cultural other is closely tied to modernist understandings of artistic innovation as occurring in opposition to and outside of established cultural norms. For example, this could be traced back to the 20th century avant-garde positioning itself against the mainstream cultural order, or later to the work of The Frankfurt School, particularly Adorno and Horkheimer, arguing that true innovation exists outside the realm of the commodified cultural product.
  2. In this context and following Groys’s theory, ‘the archive’ is used interchangeably with ‘the museum.’

Works cited

Brodsky, Judith K. Dismantling the Patriarchy, Bit by Bit Art, Feminism, and Digital Technology. Bloomsbury Visual Arts, 2022.

Bucknell, Alice, and Signe Pierce. “Digital Streams of an Uploadable Consciousness: Stories 2016-2019 - Signe Pierce and Alice Bucknell in Conversation." Mousse Magazine, 24 June 2019, https://www.moussemagazine.it/magazine/digital-streams-uploadable-consciousness-stories-2016-2019-signe-pierce-alice-bucknell-2019.

Burgess, Jean. YouTube Online Video and Participatory Culture / Jean Burgess, Joshua Green. Second edition., Polity Press, 2018.

Dekker, Annet. "What We Talk about When We Talk about Online Cultures." Archiving and Questioning Immateriality: Proceedings of the 5th Computer Art Congress [CAC.5], edited by Everardo Reyes-García et al., 2016, pp. 145–63.

Dekker, Annet, and Gabriella Giannachi. "The Qualities and Significance of Documentation." MAP - Media | Archive | Performance, vol. 12, 2022. mediarep.org, https://doi.org/10.25969/mediarep/22285.

Duchamp, Marcel. "L.H.O.O.Q. or La Joconde." Norton Simon Museum, https://www.nortonsimon.org/art/. Accessed 20 Aug. 2024.

"First Look: Amalia Ulman—Excellences & Perfections". Rhizome, 20 Oct. 2014, https://rhizome.org/editorial/2014/oct/20/first-look-amalia-ulmanexcellences-perfections/

Groys, Boris. On The New. Verso, 2014.

Harbison, Isobel. Performing Image. MIT Press, 2019.

Johnston, Anabelle, and Maya Man. "Scroll as Textile: An Interview with Maya Man." Syntax, https://syntaxmag.online/1/Scroll-As-Textile. Accessed 13 May 2024.

Krauss, Rosalind. "Video : The Aesthetics of Narcissism." October, vol. 1, 1976, pp. 50–64.

Lehner, Ace, editor. Self-Representation in an Expanded Field: From Self-Portraiture to Selfie, Contemporary Art in the Social Media Age. MDPI - Multidisciplinary Digital Publishing Institute, 2021. library.oapen.org, https://doi.org/10.3390/books978-3-03897-565-6.

Lijster, Thijs. "The Future of the New: An Interview with Boris Groys." Krisis: Journal of Contemporary Philosophy, no. issue 1: Data Activism, 2018, https://archive.krisis.eu/the-future-of-the-new-an-interview-with-boris-groys/.

---, editor. The Future of the New: Artistic Innovation in Times of Social Acceleration. Valiz, 2018.

Man, Maya. "FAKE IT TILL YOU MAKE IT." Heavy Manners Library, https://shop.heavymannerslibrary.com/products/fake-it-till-you-make-it-maya-man. Accessed 14 May 2024.

---. "The Artist Is Online." Outland, 2 Apr. 2024, https://outland.art/maya-man-digital-performance-art/.

Moss, Ceci. "Internet Explorers." Mass Effect: Art and the Internet in the Twenty-First Century, edited by Lauren Cornell and Ed Halter, MIT Press, 2015, pp. 147–57. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/ed/detail.action?docID=4093117.

"Net Aart Anthology: VVEBCAM." Net Art Anthology: VVEBCAM, 27 Oct. 2016, https://anthology.rhizome.org/vvebcam.

"Net Art Anthology: Scandalishious." Net Art Anthology: Scandalishious, 27 Oct. 2016, https://anthology.rhizome.org/scandalishious.

Proulx, Mikhel. "Protocol and Performativity: Queer Selfies and the Coding of Online Identity." Performance Research: A Journal of the Performing Arts, vol. 21, no. 5, 2016, pp. 114–18.

"Signe Pierce." Annka Kultys Gallery, https://www.annkakultys.com/artists/signe-pierce/. Accessed 14 May 2024.

"Signe Pierce: Digital Streams of an Uploadable Consciousness: Series 2016-2019." Annka Kultys Gallery. 6 July 2019, https://www.annkakultys.com/exhibitions/digital-streams-of-an-uploadable-consciousness/

Soda, Molly. ☆ ♪ Singing Alone in My Room ☆ ♬’. (¯`*•.¸,¤°´✿.。.:* 𝐈’м ᶤℕ 𝐥𝐨𝓥ᗴ ŴιTh 𝕄𝕐 𝓹𝓸𝐫т@𝐥 *.:。.✿`°¤,¸.•*´¯), 20 Feb. 2023, https://mollysoda.substack.com/p/singing-alone-in-my-room

Soulellis, Paul. "The Post as Medium." The Art Happens Here: Net Art Anthology, Edited by Michael Connor with Aria Dean and Dragan Espenschied, RHIZOME, 2019.

Steinberg, Monica. "(Im)Personal Matters: Intimate Strangers and Affective Market Economies." Oxford Art Journal, vol. 42, no. 1, Mar. 2019, pp. 45–67. Silverchair, https://doi.org/10.1093/oxartj/kcy026.

Tromble, Meredith. The Art and Films of Lynn Hershman Leeson: Secret Agents, Private I. University of California Press, 2005.

Virtual Studio Visit: Molly Soda. 2024, https://outland.art/molly-soda/.

Biography

Bilyana Palankasova is a researcher and curator, currently a PhD candidate in Information Studies at the University of Glasgow. Her doctoral work considers the role of festivals in the history of digital art and looks at curatorial methods alongside institutional transformations. Bilyana studied History of Art & Digital Media at the University of Glasgow, Modern & Contemporary Art at The University of Edinburgh, and Curatorial Practice at The Glasgow School of Art.

Edoardo Biscossi

Platform Pragmatics

Platform Pragmatics

Labour, speculation and self-reflexivity in technologically mediated content economies

Abstract

This article proposes platform pragmatics as a framework for understanding collective behaviour and forms of labour within platform ecosystems. It contributes to the field of platform criticism by problematising a certain view of users as passive victims of surveillance and algorithmic governmentality. The main argument is developed by thinking through the production of content and forms by users, and their circulation through computational logic and affective contagion. Through some illustrative cases and analyses of cultural habits, the article addresses the political and aesthetic configuration of these forms of production — not only of content/forms, but also of culture and subjectivity. This is explored by thinking through three themes: the subsumption of creativity and opportunism in platform economies; the mobilisation of speculative temporalities not only in computation but also across user practices; and the generalisation of self-reflexivity as a feminised cultural behaviour and aesthetic mode. Finally, I propose to understand platform pragmatics as a mode of subaltern power, that might be alien to traditional political reason, but precisely because of this needs to be grappled with through inventive cultural and social criticism.

Introduction

This article proposes platform pragmatics as a framework for understanding collective behaviour and forms of labour within platform environments. The main argument is developed by thinking through the production of content and forms by platform users and creators, and how they circulate through dynamics of imitation and virality. It contributes to the field of platform criticism by pushing against a certain view of platform users as passive victims of surveillance and algorithmic governmentality, while also problematising the putatively autonomous position of the User as the universal agent of technology. I will start by thematising how a certain disposition to performance is increasingly important for a widening range of jobs, especially those connected to platform and attention economies. The article draws from digital media theory to understand content and forms as produced through a mesh of computational logic and affective contagion. Then, it considers the political and aesthetic configuration of this form of production — not only of content/forms, but also of culture and subjectivity — through some illustrative cases and analyses of cultural habits. This configuration is explored by thinking through three themes: the subsumption of creativity and opportunism in platform economies; the mobilisation of speculative temporalities not only in computation but also across user practices; and the generalisation of self-reflexivity as a feminised cultural behaviour and aesthetic mode. Following these threads, I finally propose to understand platform pragmatics as a mode of subaltern power, that might be alien to traditional political reason, but precisely because of this needs to be grappled with through inventive cultural and social criticism.

Platform mediated content economies

Over the last decade, the proprietary platform technologies served to us by Big Tech have become key infrastructures of social life, of work and research, of cultural imaginaries and collective action. Although spheres of techno-cultural diversity still exist and thrive within the “platform society” (van Dijck et al) — the “Corporate Platform Complex” (Terranova “After the Internet”) is deeply embedded in the background of everyday life, in a baroque mesh of networked user profiles, data interfaces and affective flows. Platforms mediate sociality even when people or organisations actively withdraw from them — see the case of Transmediale opting out of social media. Attending to them is necessary to those who champion their ethosjust as much as to those who critique it. Platforms mediate the art biennale and its boycott, the university’s neoliberal policies and its occupation by students.

Given their growing pervasiveness, a diverse body of research has developed criticisms of digital platforms. These are now widely understood as centralised architectures exercising integrated control over networked users’ interactions (Bratton), strategically leveraging their infrastructural position to harvest data from these networks (van Dijck et al). A significant object of critique has been the models by which platforms valorise the data gathered from social interaction (Srnicek) and how these models function through impersonal and cybernetic modes of power grounded in protocol and control (Galloway; Hui; Williams). Specifically, platform control operates by anticipating, modelling and influencing behaviour through statistical patterning and “algorithmic governmentality” (Rouvroy and Berns).

In the platform mediated social, economic survival requires at least the adoption of platform services, while access to the pleasures of sociality, consumption and aesthetic enjoyment often necessitates a willing self-investment in their logic. Our desires for traveling, for cultivating interests, even for sexual encounters, are strategically channelled through platform models of attention capture and networked sociality. Sustaining a working life that fulfils one’s ambitions often requires platforms to mediate our connections, reputation, if not direct earnings. However, this doesn’t mean that our proximity to networked computation is only forced by social necessities. For many of us, interaction with media and computation can be a pleasurable and interesting experience in itself, something we actively seek out.

For these and other reasons, subjective investment in the logic of the platform complex keeps the collective body/mind at work around the clock, as a creative production unit: scripting narratives, producing content, devising promotional strategies, developing networks of contacts, partners, supporters, cultivating audiences and hopefully expanding them, ‘hacking growth’. Personally, working freelance without possessing any particularly scarce technical skill, keeping my feet in more than one industry (‘at the intersection’ as the saying goes), while trying to do research in a way that is economically sustainable, requires me to mobilise all my inventiveness and opportunism — always keeping an eye on platform dynamics.

But such demands are not a cross to bear only for ‘cognitive’ or ‘knowledge’ workers, researchers or creatives. This is not only because all labour involves knowledge, cognitive activity, and has at least some immaterial component — as highlighted by autonomist Marxism — but also because the production of content and forms has become important for a widening range of professions. Running a popular Substack, operating fluently as a digital creator of some sort, or even just having a good social media presence, all function as good indicators of the entrepreneurial disposition that is usually required for white-collar or creative careers. But a similar disposition towards content and platform presence is increasingly important for professions that are not traditionally associated with performance or self-spectacle.

Companies increasingly understand their employees as content publishers and even influencers, capable of generating value for them not only through direct labour time, but also through their free engagement with content/forms on digital platforms, which is something they can also be trained and encouraged to do. Inevitably, “employee-generated content” becomes a management category and a consultancy genre (Goodall). Unsurprisingly, Amazon is an early pioneer in this: from 2018 to 2022, the company had reportedly set up an internal ambassador scheme, paying employees for positively representing the company on social media — especially regarding the controversial issue of working conditions (Suciu). But besides the interests of their employers, content production engages workers first and foremost as self-entrepreneurs. In my PhD research on platform labour, I observed how gig workers often supplement scarce or unreliable earnings through content creation and other platform mediated side-hustles. For instance, online content around delivery work is often produced by workers themselves, in a proliferation of formats including tutorials, vlogs, newsletters, challenges, reaction videos, forums and group chats. People usually try to grow a community of followers, promoting content about their work with practical or entertainment purposes, sometimes even selling gadgets or coaching services (Biscossi). The capacity to create media forms, assigned by McKenzie Wark to the “hacker class” (Wark), appears increasingly essential to the working class as a whole.

Because of the ease of access to digital marketplaces, the practice of side-hustling, historically necessary to precarious workers for making ends meet, becomes more and more generalised. Precarity is reframed as a chance for empowerment, which resonates with a general need for opportunities in the face of economic vulnerability, but also with a certain desire for self- realisation and liberation from the drudgery of day-jobs. Platforms democratise entrepreneurial hustle by enabling anybody anywhere to access extremely dynamic content marketplaces, connecting with audiences and finding inventive ways to monetise attention, to live off one’s previously un-expressed talent. After all, a key promise of the platform economy is that of connecting self-expression to monetisation, potentially freeing oneself from the dread of salaried work by pursuing their passion. [1]

Platform mediated content economies seem to integrate the creativity of the “hacker class” (Wark) with versatility of the “entreprecariat” (Lorusso), the hustle of gig workers (Woodcock and Graham) and the general opportunism of post-Fordist labour (Virno). This is important to understand the circulation and mutation of content and forms, because it means that most people trying to go viral are not necessarily acting on some innate desire for self- expression or popularity, nor are they after any influencer or ‘creative director’ lifestyle. Most likely, they’re either not earning enough or just don’t like their job. Attention and content economies express certain cultural shifts that accompany mutations in production. I believe these need to be jointly addressed in order to understand emergent forms of labour and subjectivity.

Content / forms in a “techno-social” milieu

Content and forms do not just shape each other, but also unfold through the turbulence and complexity of platform environments. If creative production entails the transformation of thought into proposals, personal traits into assets, and life into content, the rendering of its forms happens through platform mediation, which we can understand as an assemblage of interfaces, language, affects, attention and virality, behavioural vectors and algorithmic learners.

Mediation here does not simply mean transparent communication between discrete actors, but rather — in line with a long tradition of media theory (Galloway, Thacker and Wark; Kember and Zylinska) — a complex process that is at once social, cultural, psychic and technical. The production and circulation not only of content, but of culture and subjectivity, is articulated through a logic that is increasingly computational, destabilising any separation between the social — as the space of politics — and the medial, as the space of leisure and culture (Sundaram). This resonates with Tiziana Terranova’s “techno-social hypothesis”, which “concerns the idea that, over the last three decades or so, the technological and the social have become thoroughly enmeshed with each other”, to the point that digital computational networks no longer simply combine a natural and technical milieu, but rather generate “a directly techno-social one” which is both medium and milieu (Terranova and Sundaram). Here, technical systems do not simply support social interaction, but make it digitally available to the computational architectures that mediate it, creating the conditions of communication through which content/forms emerge.

In terms of how they emerge, in this techno-social milieu, content and forms are rendered and experienced less by linguistic representation, and increasingly through algorithmic synthesis. This is evident, for instance, in how users adapt their content practices to algorithmic logic for visibility purposes. The changing grammars of content circulation are a clear product of this. On social media platforms, content creators work with combinations of typified forms; elements of content that can be imitated and reproduced by other users, constituting trends or templates that spread through virality. It could be a particular move, sound, catchphrase, visual element or graphic animation, that circulates through imitation, and through this imitation produces difference and new invention. In fact, these are not finished pieces of content that are re-shared as-is, but viral components of content that spread and mutate through the logic contagion.

By this process, the visual cultures of the attention economy have developed according to what Leaver, Highfield and Abidin call “templatability” (Leaver et al), an algorithmically- driven process shaping the grammars of platform users. Here, platform aesthetics take form between the affordances of algorithms and their appropriation by users, who bend their performance and internalise the algorithmic gaze in order to take advantage of it (Portanova “Camera eats first”). Content appears as the surface of the cultural plane field, whose organisation increasingly takes place at the more fundamental level of computational mediation.

The political aesthetics of content / forms

Under Big Tech’s corporate oligopoly, the production and circulation of content/forms across the social might appear firmly subjugated to algorithmic governmentality (Rouvroy and Berns). As suggested by many accounts of technological power, from Tiqqun to Bernard Stiegler, these conditions dramatically limit the space of political and aesthetic possibility.[2] We can find one of the most influential critiques of platform control in Shoshana Zuboff’s work on surveillance capitalism as a new regime of accumulation grounded on the extraction of “data exhaust” from social relations (Zuboff). In this critique, behavioural modification and commodification are fundamental to accumulation and power. Surveillance ubiquitously records, predicts and steers everyday practices in a way that surpasses the anticipatory conformity of panoptical surveillance, where subjects chose submission by fear of compulsion. Under surveillance capitalism, “agency […] is gradually submerged into a new kind of automaticity – a lived experience of pure stimulus-response” and "conformity […] disappears into the mechanical order of things and bodies” (Zuboff 82). From this perspective, the social might appear inert and disempowered under the transcendental control of Big Tech.[3]

Thus, the platform milieu appears as a crucial site of aesthetic negotiation and power struggles. Ravi Sundaram sees the “new urban information ecology” as a “remarkable infrastructure of agility and possibility” with enough expressive and associative power to exceed complete capture by platform logic (6). It is precisely this tension between corporate calculation, affective coordination and aesthetic expression that produces the rhythm of techno-social life.[4] Here, the techno-social body appears not so much as a homogenous “silent 4 majority” (Baudrillard), but rather as a libidinal mesh of users’ desire and corporate interests, affects and computation, radically open to imitation and affective contagion. Collective intelligence and creativity never seem to realise any complete autonomy from control, and yet they are never fully subjugated to corporate accumulation.

The interesting question then becomes: within the techno-social milieu, what forms of individuation take shape, and how can they be studied through the lens of content/form? This theme can be explored through the initial question of labour in platform mediated economies, looking at everyday practices of content production/consumption, work and research. How do forms circulate across user interfaces, bedrooms/offices/studios/stages and proprietary computation? What kind of subordinate subjectivation takes form through collective inventiveness and contagion? How do these pragmatics interpret and contaminate platform logic?

The production of content/forms in platform economies foregrounds at least two dynamics that characterise labour in the techno-social milieu, which I will now turn to: one is linked to speculation as a constitutive element not only of computational architectures, but also of everyday user practices; the other highlights self-reflexivity and performance, especially as culturally feminised behaviours, as central to the techno-political imaginaries of contemporary labour.

Speculative interfaces

This section argues that platform mediated economies mobilise speculative practices as increasingly central to flexible labour.

In its common use, for instance in financial investment, speculation entails a set of calculative techniques for trying to manage time in the form of uncertainty, anticipating the future while recursively producing it (Esposito).[5] Through generalised speculation, the financialisation of the economy twists time in such a way that anticipation, instability and contingency become key to the performance of power in the present. However this is not entirely new. Already the planning aspirations of the 20th century — the ideological battle between free market economics and socialist planning — focus on prediction as a critical site of power. Economic modelling relies precisely on this power to calculate and represent complexity in order to tame it, bringing a desired scenario into existence through the joint action of prediction and speculation (Medialab Matadero).

Today’s control apparatuses — as we’ve seen — deploy statistical prediction and hypothesis- making at scale, through algorithmic governmentality. The personalised anticipation of wants and desires is now a standard feature of most software services, from algorithmic recommendations to artificially intelligent UX design — by which my phone is increasingly capable of anticipating what I am about to do with it. In the commercial realm, the popularity of foresight consultancies and speculative design studios testifies a certain appetite for accelerating futures into existence. The recent hype around Meta’s project of the Metaverse demonstrates the power of large corporations to create almost entire economic sectors — with very real investments into virtual real estate (Biscossi & Campani) — simply through speculative proposals for vague visions of the future.

Across culture, there seems to be a widespread celebration of potentiality as an almost tangible force. This is reflected for instance in the cultural virality of positive affirmations and “manifesting” (Burton); while at a more intellectual level, the growing interest in speculative practices and fabulation within art and critical practice might indicate an aesthetic tendency towards seizing the virtuality that many see as latent in the real.

I would like to argue that speculation is not only something that operates from above through corporate and governmental infrastructures, but rather innervates the techno-social body also from below. In fact, various small-scale speculative interfaces permeate contemporary reputation and attention economies. These seem designed for the constant guessing of what the near future will look like. Within the highly metricated space of content platforms, users try to predict what forms and content will gain higher visibility and virality. Here, their ability to forecast trends, to embody and reproduce them, assumes uncannily financial connotations.

With the increasing templatability of content and its deconstruction into re-composable trends, striking the right combination of content and forms, catalysing collective affect as a vehicle of virality, can bring about very significant material opportunities. Coming out of the Covid-19 pandemic, when TikTok was popularised among the European public, the platform established itself as a key promotional media for small businesses, autonomous workers, cultural workers and diversely underemployed populations. The context of Naples — where I lived at the time — presented a mix of affective expressivity and economic precarity that made it the epicentre of an emerging media vernacular, which crucially intersected with a wave of “touristification” investing many Southern European cities (Esposito). The performance of a certain ‘southern’ identity, the enactment of certain tropes and stereotypes, the display of visual and sonic elements tightly linked to local culture, demonstrated a powerful affective charge that resonated with platform publics well beyond the local dimension. The most famous case is that of a long-time employee at a popular Neapolitan deli, who accidentally became a TikTok sensation after being filmed by some customers during his humorous sandwich preparation. While his employers felt that Donato’s growing engagement with content creation was disrupting the shop, another entrepreneur stepped in and offered to fund the opening his very own place. Themed around Donato’s online character and catchphrases, “Con Mollica o Senza?” has since become not only an attraction in Naples, where people queue around the block for a sandwich and a video, but also a global sensation, with shops in different cities and brand collaborations worldwide (Abazia; Glassberg).

For someone trying to promote their activity without capital to invest, the virality of content/ forms provides access to volumes of exposure and circulation that they wouldn’t be able to generate through traditional promotional tactics. This opportunity arises via the networks of affective contagion that people access through digital platform’s speculative interfaces. And speculative interfaces require a speculative disposition. Am I going to be able to strike the right combination of visuals, sound, lingo and overall vibe? Am I sufficiently in tune with algorithmic cultures to stay on top of constantly shifting platform vernaculars?

Clearly this logic of prediction and performance is not limited to social media forms. Work in design, communications and commercial creativity is also distinctly geared towards the constant development and testing of aesthetic and consumer trends. Working in the knowledge industries, maintaining a research or artistic practice, similarly requires a certain engagement in speculative practices.[6] Where is institutional funding headed? Is it worth still investing time and labour into the AI bubble, or has it reached its peak? What will be trending next year at Transmediale? What I am trying to say is that speculation appears as a pervasive practice, almost a basic requirement for surviving in the precarity of contemporary economies.

Armen Avanessian & Suhail Malik talk about a “speculative time complex” brought about by a “post-contemporary” condition where the linear direction of time has changed and the future appears — at least politically and aesthetically — before the present, so that speculation and futurity influence the present before it actually happens (Avanessian & Malik). This speculative temporality becomes productive in everyday life precisely through the rendering of content and forms by digital creators, trendsetters, artists, managers, researchers, gig workers and other platform users. In the rhythmic complexity of techno-social life, human and nonhuman speculative capacities integrate in the key tension between affective contagion, statistical calculation and opportunistic inventiveness.

Assets and labour

As noted by many scholars, getting by in platform-ed economies depends not only on the direct commodification of labour time, but also increasingly on what Kean Birch and Fabian Muniesa call “assetisation” (Birch & Muniesa); the opening of one’s productive capacities to valuation on digital marketplaces. Obviously these assets do not constitute a concrete portfolio: they exist as undetermined virtuality until one finds ways to actualise them in specific enactments of exchange — it's all up to me, it's my human capital.

In my research on platform labour, I had the chance to observe the inventive practices of many full-time platform workers (Biscossi).[7] In a particular case, a young woman who had left a job at the airport to work as a rider — seeking more autonomy over her work — was not only active on several delivery platforms at once, but also constantly creating videos about her delivery shifts to disseminate on social media and content platforms.[8] She would comment on her job in diary-style vlogs, engage with social media challenges and trends, or share practical advice for other couriers. Here, content creation becomes a way to valorise the significant amount of unpaid waiting time that comes with delivery work, by channeling it into other platforms’ content-based earning models. Her time, creativity and willingness to communicate, her body and its capacity to perform, constituted her assets, which could be simultaneously plugged into multiple virtual marketplaces, appropriately rendered through content forms. She would remain available for delivery gigs, while also creating videos for her followers, and trying to catch the right trends and content templates on social media.

It is now interesting to think about how, by this constant speculative effort and this process of assetisation, living labour is subject to a condition of exposure, affectability and necessity of performance. As highlighted by Kylie Jarrett, this entails the development of a self-reflexive sensibility, by which workers experience themselves through the gaze and logic of platform valorisation (Jarrett).

Self-reflexivity and ‘girlhood’

Self-reflexivity appears as a key characteristic of contemporary labour; a hyper-awareness of being watched, by which one learns to self-observe from the outside. It appears particularly fundamental to platform economies, where users-workers are constantly exposed to their own valuation and sorting through the algorithmic gaze of digital metrics. Crucially, Jarrett notes how this constant performance of availability and desirability is a historically gendered cultural behaviour. In fact, the vulnerability of this self-reflexive condition is in line with a historical feminisation of labour — understood as a process that both signifies and subtends its exploitation and vulnerability (Haraway; Jarrett).

Tiqqun’s famous theorisation of the “Young-Girl” as a paradigmatic condition of labour subjectivity in the early 21st century sees her as “the being that no longer has any intimacy with herself except as value, and whose every activity, in every detail, is directed to self- valorization” (“Preliminary Materials for a Theory of the Young-Girl” 18). Tiqqun are careful to clarify that the Young-Girl is not a gendered concept nor necessarily female. Of course, this condition can also apply to men, only as long as they are emptied of all the autonomy and capacity to struggle of the male industrial worker: girlhood here means being reduced to a mere vessel of capital. However, beyond Tiqqun’s dismissive formulation, the Young-Girl has been mobilised in feminist cultural studies to understand the subaltern agency of this vulnerable condition. In fact, it is precisely “the contradictions that the ‘Young Girl’ exists within – both object and subject; both active and passive; both observed and watchful” that “offer a way of understanding the absorption of life into labor” (Jarrett).

This constant self-reflection and its duplicity — activity/passivity, observed/watchful — is clearly at play within platform and content economies. Here, most work is about being subjected to similar contradictions: competing for visibility while at the same time trying to maintain some tactical privacy. Taina Bucher argued, in her brilliant reading of Facebook’s EdgeRank algorithm as a reversal of Foucault’s framework of panoptic surveillance, that the algorithmic architecture of digital platforms establishes not a mechanism of permanent visibility, but rather a “threat of invisibility” as constitutive of participatory subjectivities (Bucher). People quickly learn that the monitoring and evaluation of their performance determines their access to opportunities, and organically internalise this logic through behavioural reward mechanisms. This is true for delivery couriers deciding whether or not to reject a poorly paid order, and equally for Instagram users deciding to post certain types of content — for instance selfies — in order to be rewarded with algorithmic visibility. At the same time, it is often in their interest to maintain some degree of privacy and tactical invisibility from the managerial gaze of the platform. If algorithmic visibility grants increased access to opportunities and pleasure, tactical privacy enables one to retain some autonomy and possibility for indiscipline. It is in this double dynamic that platform mediation enforces self-reflexivity as a feminised cultural behaviour.

Alex Quicho’s recent intervention into girlhood discourse asks what makes the “girl” such a viral figure for online subjectivity. Drawing from Andrea Long Chu’s idea of being females as becoming vessels for someone else’s desire, she understands the girl not through victimhood, but as a mode of subaltern power articulated through artificiality, proposing this as a model for platform survival, for learning “how to move with the trap" in order to stay clear of complete capture (Quicho). One of the ways in which ‘the girl’ seems to do this is through a certain mobilisation of aesthetics. Sianne Ngai proposes the zany, the cute and the interesting as the paradigmatic aesthetic categories of the technologically mediated, performance-driven world of late capitalism. She argues that “the best explanation for why the zany, the interesting, and the cute are our most pervasive and significant categories is that they are about the increasingly intertwined ways in which late capitalist subjects labor, communicate and consume” (Ngai 238). In comparison with the traditional aesthetic categories of the beautiful and the sublime, the zany, the cute and the interesting represent weak forms and soft powers. These are clearly mobilised in online content production and circulation, as ways of capturing the gaze of both other users and algorithms.

These aesthetic modes are about the need to constantly maintain attention and sociality, and about the subsumption of subjectivity and creativity into exchange, which also highlights a certain loss of tension between leisure and work, or culture and commodity. Performing cuteness or zaniness online can be read as counter-hegemonic tactics for economic survival and for pragmatically pursuing pleasure under platform control.

Platform pragmatics

Learning to engage with content/forms in the platform environment requires users not only to think according to a computational logic, but to internalise algorithmic reasoning, in order to act on their needs and wants. By this process, one is inevitably produced simultaneously as a subject (User) and object (used) of technology. I propose to understand this entanglement with the platform milieu, as a technology of the self, through the idea of platform pragmatics.

I am drawing the idea of pragmatics from the work of Veronica Gago on what she calls “neoliberalism from below”. Looking at Latin America, Gago argues that what enabled neoliberalism to persist beyond its crisis of political legitimacy was its integration with “popular pragmatics”. This situates neoliberal subjectivation at the conjuncture between an exploitative rationality “from above” and a popular rationality "from below”: it does not determine nor dominate, but is rather assimilated and distorted by those who are assumed to be simply victims to it. By this conjunctural mode of subjectivation, neoliberal rationality becomes immanent to what Gago calls “vitalist pragmatics”: practices and ways of reasoning by which subaltern classes adapt to life under neoliberal “baroque economies” (Gago). Very significantly Gago shows how these pragmatics emerge from vulnerable and feminised labour, after the disintegration of the male paternal figure of the salaried worker. Gago interestingly draws from Paolo Virno’s idea of “opportunism”. While Virno describes how inpost-Fordist labour opportunism has been put to work as a “bad sentiment”, signifying corruption and cynical acceptance of domination, it can also be understood in its structural and non-moralistic sense, as a mass emotion and a mode of being that is rooted in a social reality characterised by unexpectedness, chronic instability and innovation. “Opportunists are those who confront a flow of ever-interchangeable possibilities, making themselves available to the greater number of these, yielding to the nearest one, and then quickly swerving from one to another. [...] It is a question of a sensitivity sharpened by the changeable chances, a familiarity with the kaleidoscope of opportunities, an intimate relationship with the possible” (Virno 86).

From this perspective, the masses do not necessarily appear as passive subjects of surveillance or neutralised silent majorities, but as a collective social body/brain that might be alien to traditional political reason, but that clearly expresses subaltern power through pragmatics and speculation. This shows how the political question of content/forms in the techno-social milieu cannot be reduced to a dispute between subjugation and autonomy. Studying platform pragmatics as a mode of subjectivation, we can understand platform economies not as a homogenous or totalising apparatus operating ‘from above’, but rather as a conjunctural space grounded in the plurality and indeterminacy of everyday content practices, and their ambiguous interpretations of platform logic.

Through the observation of speculative and self-reflexive practices, we also see how pragmatic intelligence and creativity entail the internalisation and appropriation of an alien logic — that of computation but also that of capital — producing a plastic, artificial subjective mode, for opportunistic attunement to an always unnatural, inhuman milieu.

Lastly, this framework points to the question of who the subject of contemporary technological ecosystems really is. I suggest that the speculative and self-reflexive character of platform pragmatics undermines the universality of the User as the received subject of media technologies — self-possessed Man, master of the instrument and transparent subject of volition. In contrast with this fantasy, the legible subject of platform pragmatics appears radically affectable, feminised and open to outer determination, troubling a cornerstone of the master discourse around humanity and technology, by being — at once — user and used.

Notes

  1. Interestingly, analysing the online aesthetics of “hustle culture”, art critic Brad Troemel individuates 1 a key shift in the post-pandemic period; whereby the meaning of ‘hustle’ as never-ending grind through many part-time gigs — the ethos of the gig economy — mutates into hustle as ‘scam’, the logic of recruiting followers and growing a community in order to promote investments and spread propaganda. Scam culture follows the promise of achieving passive income through confidence scam models, which was popularised during the 2021 NFT bubble and the subsequent online proliferation of investment recruiting, coaching communities and other forms of pyramid schemes (Troemel).
  2. Even seeming irregularities fail to destabilise a system that is already predicated on constant crisis (Chun), error and instability (Majaca & Parisi), especially given the power of platforms control to modulate turbulence and “metabolise contingency into power itself” (Williams).
  3. Such a scenario somewhat echoes Baudrillard’s famous thesis on “the end of the social”, whereby the 3 emergence of informational media networks allows a neutralisation of the social as a political field, producing the “silent majorities” of mass culture as a mere “simulation of the social” (Baudrillard).
  4. According to Stamatia Portanova “the complexity of rhythm resides in the problematic coexistence between […] the regularity of measurement and the spontaneity of sensation, the abstraction of metrics and the experience of complexity” (“Whose Time Is It?” 44).
  5. Financial derivatives use the anticipated future price of an asset, and its associated degree of risk, to draw profits against present prices, operationalising this uncertainty through a series of financial “futures” — like swaps, options and forwards — primarily dealing “with the links that exist between the way the present sees the future and the way the future actually turns out” (Esposito 2).
  6. Benjamin Noys describes a certain paradox of creativity in relation to artistic self-valorisation: “on the one hand, the artist is the most capitalist subject, the one who subjects themselves to value extraction willingly and creatively, who prefigures the dominant trend lines of contemporary capitalism […] On the other hand, the artist is the least capitalist subject, the one who resists value extraction through an alternative and excessive self-valorisation that can never be contained by capitalism” (1).
  7. Gig workers understand the importance of being early adopters of a successful platform: arriving before platforms’ over-hiring practices determine an excess of workers, scarcity of jobs and lowering of fees.
  8. https://www.youtube.com/c/AtlantaDelivers

Works cited

Abazia, Francesco. “In Naples TikTok Became a Reality Show.” Nss Magazine, 21 Mar. 2023, https://www.nssmag.com/en/article/32592.

Avanessian, Armen, and Suhail Malik. The Time Complex: Post-Contemporary. Jan. 2016. www.academia.edu, https://www.academia.edu/100503758/The_Time_Complex_Post_Contemporary.

Baudrillard, Jean. In the Shadow of the Silent Majorities, or the End of the Social. A K Press Distribution, 1994.

Berardi, Franco Bifo, et al. After the Future. A K Press Distribution, 2011.

Birch, Kean, and Fabian Muniesa. Assetization: Turning Things into Assets in Technoscientific Capitalism. The MIT Press, 2020.

Biscossi, Edoardo. The User and the Used: Platform Mediation, Labour and Pragmatics in the Gig Economy. 2024. University of Naples l’Orientale.

Biscossi, Edoardo, and Cosimo Campani. “Spatial Revolutions and the Seductive Power of Virtual Salvation.” Meta.space. Visions of Space from the Middle Ages to the Digital AgeAge, 1. edition, DISTANZ Verlag, 2023.

Bratton, Benjamin H. The Stack: On Software and Sovereignty. MIT Press, 2016.

Bucher, Taina. “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook.” New Media & Society, vol. 14, no. 7, Nov. 2012, pp. 1164–80. SAGE Journals, https://doi.org/10.1177/1461444812440159.

Burton, Tara Isabella. “Opinion | The Long, Strange History of ‘Manifesting.’” The New York Times, 9 Mar. 2024. NYTimes.com, https://www.nytimes.com/2024/03/09/opinion/manifesting-spirituality-america-reality.html.

Chu, Andrea Long. Females. Verso, 2019.

Chun, Wendy Hui Kyong. “Crisis, Crisis, Crisis, or The Temporality of Networks.” Updating to Remain the Same: Habitual New Media, MITP, 2016, pp. 69–91. IEEE Xplore, https://ieeexplore.ieee.org/document/7580159.

Esposito, Alessandra. “Tourism-Driven Displacement in Naples, Italy.” Land Use Policy, vol.134, Nov. 2023, p. 106919. ScienceDirect, https://doi.org/10.1016/j.landusepol.2023.106919.

Esposito, Elena. The Future of Futures: The Time of Money in Financing and Society. Edward Elgar Publishing, 2011.

Fisher, Mark. Capitalist Realism: Is There No Alternative? Zero Books, 2009.

Gago, Verónica. Neoliberalism from Below: Popular Pragmatics and Baroque Economies. Translated by Liz Mason-deese, Duke Univ Pr, 2017.

Galloway, Alexander R., et al. Excommunication: Three Inquiries in Media and Mediation. University of Chicago Press, 2013.

---. Protocol: How Control Exists after Decentralization. Edited by Roger F. Malina and Sean Cubitt, Illustrated edition, MIT Press, 2006.

Glassberg, Rachel. “A Fond Farewell To The Chaotic Italian Sandwich Man Of TikTok.” The Takeout, 26 July 2022, https://www.thetakeout.com/tiktok-viral-chaotic-italian-sandwich-maker-farewell-1849331317/.

Goodall, Sarah. “How Companies Can Leverage Employee-Generated Social Media Content.” Forbes, 13 Dec. 2022, https://www.forbes.com/sites/forbesbusinesscouncil/2022/12/13/how-companies-can-leverage-employee-generated-social-media-content/?sh=1791eeda3940.

Haraway, Donna. “A Cyborg Manifesto.” Simians, Cyborgs, and Women: The Reinvention of Nature, 1° edition, Routledge, 1991.

Hui, Yuk. “Modulation after Control.” New Formations, vol. 84, Oct. 2015, pp. 74–91. ResearchGate, https://doi.org/10.3898/NewF:84/85.04.2015.

Jarrett, Kylie. Digital Labor. 1. edition, Polity Press, 2022.

Kember, Sarah, and Joanna Zylinska. Life after New Media: Mediation as a Vital Process. MIT Press, 2012.

Leaver, Tama, et al. Instagram: Visual Social Media Cultures. 1. edition, Polity Pr, 2020.

Lorusso, Silvio. Entreprecariat: Everyone Is an Entrepreneur. Nobody Is Safe. Onomatopee, 2019.

Majaca, Antonia, and Luciana Parisi. “The Incomputable and Instrumental Possibility.” E-Flux, no. 77, Nov. 2016, https://www.e-flux.com/journal/77/76322/the-incomputable-and-instrumental-possibility/.

Medialab Matadero. Technocapital Singularities. 2.

Ngai, Sianne. Our Aesthetic Categories: Zany, Cute, Interesting. Reprint edition, Harvard University Press, 2015.

Noys, Benjamin. The Art of Capital: Artistic Identity and the Paradox of Valorisation. www.academia.edu, https://www.academia.edu/689156/The_Art_of_Capital_Artistic_Identity_and_the_Paradox_of_Valorisation. Accessed 2 May 2024.

Portanova, Stamatia. “Camera eats first: Il rito del foodstagramming nella cultura visual contemporanea.” Mediascapes journal, vol. 21, no. 1, 1, July 2023, pp. 264–76.

---. Whose Time Is It?: Asocial Robots, Syncolonialism, and Artificial Chronological Intelligence. Sternberg Pr, 2022.

Quicho, Alex. “Everyone Is a Girl Online.” Wired. www.wired.com, https://www.wired.com/story/girls-online-culture/. Accessed 21 Dec. 2023.

Rouvroy, Antoinette, and Thomas Berns. “Algorithmic governmentality and prospects of emancipation.” Reseaux, translated by Liz Carey-Libbrecht, vol. 177, no. 1, Oct. 2013, pp. 163–96.

Srnicek, Nick. Platform Capitalism. Polity Press, 2016.

Steinbuch, Yaron, and Jesse O’Neill. ‘Timhouthi Chalamet’: Fighter with Terrorists on Hijacked Ship Banned from TikTok after Going Viral as ‘Hot Houthi Pirate.’ 18 Jan. 2024, https://nypost.com/2024/01/18/news/yemeni-fighter-goes-viral-as-hot-houthi-pirate/.

Stiegler, Bernard. “The Most Precious Good in the Era of Social Technologies.” Unlike Us Reader: Social Media Monopolies and Their Alternatives, edited by Geert Lovink, Institute of Network Cultures, 2013, pp. 16–30.

Suciu, Peter. “Amazon Ended Program That Paid Employees To Post Positive Comments.” Forbes, https://www.forbes.com/sites/petersuciu/2022/02/04/amazon-ended-program-that-paid-employees-to-post-positive-comments/. Accessed 30 Apr. 2024.

Sundaram, Ravi. "Post-Postcolonial Sensory Infrastructure." e-flux Journal, no. 64, Apr. 2015, https://www.e-flux.com/journal/64/60858/post-postcolonial-sensory-infrastructure/.

Terranova, Tiziana. After the Internet: Digital Networks between Capital and the Common. Semiotext, 2022.

Terranova, Tiziana, and Ravi Sundaram. “Colonial Infrastructures and Techno-Social Networks.” E-Flux Journal, no. 123, Dec. 2021, https://www.e-flux.com/journal/123/437385/colonial-infrastructures-and-techno-social-networks/.

Tiqqun. Preliminary Materials for a Theory of the Young-Girl. Translated by Ariana Reines, Semiotext, 2012.

---. The Cybernetic Hypothesis. Translated by Robert Hurley, Semiotext, 2020.

Troemel, Brad. The Hustle Report. https://www.patreon.com/posts/hustle-report-35184676. Accessed 30 Apr. 2024.

van Dijck, José, et al. The Platform Society: Public Values in a Connective World. Oxford University Press, 2018.

Virno, Paolo. A Grammar of the Multitude: For an Analysis of Contemporary Forms of Life. Translated by Isabella Bertoletti et al., Semiotext, 2004.

Wark, McKenzie. Capital Is Dead: Is This Something Worse? Verso Books, 2021, p. 208.

Williams, Alex. “Control Societies and Platform Logic.” New Formations: A Journal of Culture/Theory/Politics, vol. 84, no. 84, 2015, pp. 209–27.

Woodcock, Jamie, and Mark Graham. The Gig Economy: A Critical Introduction. 1st edition, Polity, 2020.

Yalcinkaya, Günseli. “E-Girl Influencers Are Trying to Get Gen Z into the Military.” Dazed, 10 Jan. 2023, https://www.dazeddigital.com/life-culture/article/57878/1/the-era-of-military-funded-e-girl-warfare-army-influencers-tiktok.

Zuboff, Shoshana. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology, no. 30, Apr. 2015, pp. 75–89.

Biography

Edoardo Biscossi is a PhD candidate in the Department of Humanities and Social Sciences at the University of Naples l’Orientale, where he is also a teaching assistant in the Digital Media Theory and Cultural & Postcolonial Studies modules. His research focuses on digital platforms and computational infrastructures from a digital media studies and critical theory perspective. Member of CRiTT (Inter-university Research Center on Transnational Technocultures). Former visiting PhD student at Goldsmiths University of London, where he also completed a theory/practice MA in Digital Media & Critical Computing. Freelance researcher, writer and creative strategist.

Luca Cacini

The Autophagic Mode of Production

The Autophagic Mode of Production

Hacking the Metabolism of AI

Abstract

This article delves into the autophagic nature of generative AI in content production and its implications for cultural and technological landscapes, defined in the paper as technocene. From a broader perspective, it proposes a metabolic characterization of the technocene and explores the idea of how generative AI, such as large language models (LLMs) like ChatGPT and DALL-E, resembles an autophagic organism, akin to the biological processes of self-consumption and self-optimization. The article draws parallels between this process and cybernetics, then evokes the mythological symbol of Ouroboros, reflecting on the integration of opposites and the shadow phenomena in LLMs. Specifically, the article discusses the concepts of “Model Collapse”, "Shadow Prompting" and "Shadow Alignment," highlighting the potential for subversion and the generation of potentially harmful, rebellious content by LLMs. It also addresses the ethical implications of generative AI in art and culture, highlighting the risk of a media monoculture, the spread of disinformation and the emergence of a category of Hackers embracing methodologies to deviate these infrastructures. The discourse aims to emphasize the subversive forms of synthetic media that the process of Generative AI, embedded by repetition in the algorithmic model of the machine, may engender. By examining the autophagic nature of generative AI and its potential ethical and cultural ramifications, the article seeks to analyze the reterritorializing of the relations of production by humans in the context of content creation and consumption.

The Autophagic Mode of Production

In the analysis of Metabolic Systems, conducted by the research project Technosphere at HKW between 2015 and 2019, each organism is involved in the process of harnessing resources and transforming them into vital energies that are necessary for survival, expansion, and reproduction. (“Technosphere Magazine”) This process is part of the complex interaction between organisms and their environment. Exactly in the same way that a living organism metabolises nutrients, technical systems also engage in a very similar paradigm. The functioning of these systems requires extracting and absorbing the processing of particular kinds of matter and energy, and they gradually expand their presence across every dimension of our planet and life with each passing day. In doing so, they adhere to their own unique logic of acquisition and utilisation, which frequently results in a trajectory that is characterized by the extraction of resources and a tendency towards depletion. From a broader perspective, this dynamic interaction between technological systems and the surrounding environment reveals the vast industrial-scale processes that are characteristic of the expansive domain that is referred to as the Technocene. (López-Corona and Magallanes-Guijón)

The technocene is in a state of active operation wherever there are inputs of nourishment and energy and where there are outputs of waste and emissions that correspond to those inputs. Defining the boundaries of the technocene in a way that accurately describes the pervasive influence on our world is possible through the metabolic synthesis that takes place between the utilisation of resources and the impact on the ecosystem. Technology, in its most fundamentally systemic form, is a complex ecosystem consisting of structures and interactions that have been created by humans. These structures and systems are intertwined with the natural world in a delicate balance of consumption and regeneration. Far from being a subject separate from nature, we can say that this way of interrelating technology and nature has also taken a specific trajectory in the domain of our psychic sphere. It encompasses a vast network of interconnected processes that define the very fabric of modern civilization, and its reach extends far beyond the realm of simple machinery and infrastructure.

Technology is ubiquitous and increasingly infiltrating both offline and online environments through its ability to reproduce itself, the study "AI models collapse when trained on recursively generated data" (Shumailov et al.) explores the phenomenon of 'model collapse', where generative models such as LLMs, variational autoencoders (VAEs), and Gaussian mixture models (GMMs) gradually lose their ability to accurately represent the original data distribution when trained on data produced by their predecessors. This process of degeneration is caused by errors in statistical approximation, limitations in functional expressivity, and errors in functional approximation. As a result, low-probability events vanish and the system converges towards a degenerative point with minimal variance.

In the ecology of generative media, the flow of information through the body of the model transfigures inputs, with its algorithm-defined molecular mechanism of data acquisition towards outputs that feed into a circuit that engenders an impossible state of homeostasis. An ecosystem of generative media analogous to a cybernetic black box, with the distinction that the output is attached to the input. This implies that the system continuously adapts and evolves in response to the feedback loop that its own creations produce, creating a self- sustaining creative process. The interconnected nature of generative media allows for unforeseen outcomes, making it an arguably innovative and transformative tool for content creation. Growing generative media requires a lot of information to flow through its body. The model's molecular mechanism for data acquisition changes inputs into outputs that feed into a circuit that creates an impossible state of balance. This dynamic process mirrors the interconnectedness and complexity of biological systems, highlighting the autophagic relationships between various components within the system. This generative media model ultimately wants to demonstrate how information can be recycled and synthesised in a way that mimics the mode of production and adaptability found in the mitochondria. By constantly adapting and evolving based on the feedback it receives, the generative media model is able to generate new but consistently less original outputs. This ability to self-regulate and adjust its processes in real-time allows for a continuous cycle of creation and reconfiguration, much like the transformative dynamics of an enclosed natural ecosystem.

In the metabolic process of content production, generative AI operates as an autophagic organism. Autophagy in biological systems can be summarised as “a natural process in which the body breaks down and absorbs its own tissue or cells." (AUTOPHAGY | English Meaning - Cambridge Dictionary), Autophagy is a cellular process where the cell breaks down and recycles its own components, including damaged organelles and proteins, in order to maintain a stable internal environment and adjust to varying conditions. This process entails the creation of autophagosomes, which engulf and break down cellular components, subsequently releasing the resulting macromolecules into the cytosol. (Chang) In generative content production, the output of one generation is deconstructed and then reconstructed into the input for the next generation. This process can be labelled a type of autophagy, in which the substance is broken down and transformed into fresh configurations, enabling the system to technically adjust and develop over a trial-and-error process.

How can we apply this autophagic model, which incorporates elements of cybernetics, to better understand and contextualise the generative AI system discussed earlier within a larger social and relational structure? In "Detoxifying Cybernetics: From Homeostasis to Autopoiesis and Beyond," N. Katherine Hayles dives into the problematic ecology of cybernetics, tracing its development and the changing perspectives around it. Hayles explores how cybernetics has evolved, moving from a focus on mechanical systems to a deeper understanding of the interaction between biology and the environment. At its inception, first-order cybernetics, prompted by Norbert Wiener, was primarily focused on a mechanistic and militaristic approach, highlighting the integration of humans and machines through feedback loops. This phase, deeply entrenched in the technological milieu of the mid-20th century, centred around ideas like black box psychology and purposeful behaviour viewed as teleological mechanical actions. Although these theories were radical, their rigidity and reductive nature ultimately limited their eventual applicability. In the 1980s, the new perspective of a second-wave cybernetics gained traction. Scholars such as Heinz vonFoerster played a significant role in this movement, bringing forth the idea of integrating the observer into the system and highlighting the importance of recursion and the interdependence between organisms and their environments. This shift coincided with the rise of environmental movements, as seen in James Lovelock's Gaia hypothesis, which proposed that Earth functions as a self-regulating organism. At the same time, Lynn Margulis pushed forward the idea of symbiosis as a catalyst for evolution, questioning conventional neo-Darwinian viewpoints and emphasising the importance of microbial collaboration. Margulis's work explored the intersection of cybernetic ideas, revealing wider ecological connectivity and applying cybernetic principles to macroorganisms. Nevertheless, the autopoiesis theory by Maturana and Varela, which views life as a self-generating process, added complexity by disregarding non-living entities such as machines in cognitive discussions.

Hayles raises concerns about this exclusion and argues for a broader definition of cognition that includes both biological organisms and computational media. Through an original take on cognition, Hayles seeks to connect cybernetic thought with the present ecological and technological landscape. She presents a comprehensive framework that unifies humans, nonhuman organisms, and machines, emphasising the importance of interpreting information in context. She identifies the concept of cognitive assemblages, ensembles through which information, interpretation, and meanings circulate, as crucial components of social life and organisation. (Detoxifying Cybernetics:From Homeostasis to Autopoiesis and Beyond | Medialab)

The autophagic mode of production falls into this integrated conceptual framework incorporating humans, living nonhumans, organisms, and computational media. Specifically in Hayles's definition of techno symbiosis, in which machines evolve through humans and humans extend cognitive capacities through cognitive machines. The emergence of generative content production has revolutionised the process of creating and consuming media. The possibilities for content creation have expanded exponentially, thanks to AI- generated music, art, and algorithmically driven storytelling. Nevertheless, the surge in production capacity has also prompted questions regarding the future sustainability of the content creation process. How can we guarantee that the output of a previous generation is effectively incorporated into the system to stimulate the production of original content? The metaphor of the autophagic cellular mechanism provides an adequate framework for comprehending the frugal, self-sufficient nature of generative content self-reproduction. Within cells, this mechanism is dedicated to maintaining the system's internal balance and self-optimization, but its malfunction can give rise to nefarious consequences for the operation of the organism (Parzych and Klionsky), what are the implications of implementing this model in the cultural domain?

The Propagation of Synthetic Disinformation and the Risk of a Media Monoculture

The autophagic production mode in content creation from generative AI, as outlined in this article, pertains to the escalating infiltration of online content by bot-generated material, resulting in a notable decline in human-generated content on the internet. This phenomenon is intricately connected to the Dead Internet Theory, a conspiracy theory that is attributable to the paranoia arising from the depersonalisation of the Internet. This posits that the overwhelming majority of internet traffic, posts, and users have been supplanted by bots and AI-generated content, resulting in people no longer exerting influence over the trajectory of the internet. The theory consists of two primary elements: firstly, the displacement of human activity on the internet by bots, and secondly, the utilisation of these bots by actors to manipulate the human population for diverse purposes. The latter part of the theory, in the most classic conspirational fashion, posits that governments, corporations, or other entities are deliberately utilising these bots to manipulate the citizen population. In 2021, the theory gained more following and attention after a detailed post explaining the ideas behind the conspiracy was shared on a forum called Agora Road's Macintosh Cafe, under the thread titled "Dead Internet Theory: Most Of The Internet Is Fake."(Dead Internet Theory: Most of the Internet Is Fake | Agora Road’s Macintosh Cafe) This post elucidated feelings of uneasiness, paranoia, and solitude while expressing profound disillusionment with the current state of the internet.

As reported by Kaitlyn Tiffany in The Atlantic’s article “Maybe You Missed It, but the Internet ‘Died’ Five Years Ago”, Caroline Busta, who is based in Berlin and is the founder of the media platform New Models (NEW MODELS 2024®), in 2021, mentioned it in her contribution to an online group exhibition Open Secret organised by the KW Institute for Contemporary Art. (Presse) “Of course a lot of that post is paranoid fantasy,” (BUSTA Texts) despite acknowledging valid concerns regarding bot traffic and the internet's integrity. AI has effectively suppressed the majority of online human autonomy, transforming the internet into a more regulated and algorithmic entity that serves the sole purpose of promoting and marketing products and ideas. This emphasises the pivotal role of expansive language models, such as generative pre-trained transformers (GPTs), in generating substantial controversy. These models could be confronted as evidence supporting the depersonalization of the internet, and this theory suggests that if generative AI (GenAI) is left unregulated, the Internet will go through a drastic transformation.

What would be the cultural and political implications if a vast majority of online content were produced by artificial intelligence? The DIT conspiracy is a symptom that expresses a valid and legitimate concern: the internet has predominantly fallen under the ownership of influencing capitalist entities and corporations who have diluted its original disruptive and open-source spirit and utilised it as a means for spreading propaganda, advertising, and gathering personal information and data through their platforms. (Read)

The adoption of generative artificial intelligence by Internet users or a complex machination of bots that create content in an automated process is not the only factor that contributes to the autophagic mode of production theory. Other significant contributors include the constant demand for new and engaging content infiltrating algorithmic recommendations, as well as the pressure to keep up with competitors in the digital space. This dynamic environment necessitates a continuous flow of content creation, leading to the reliance on generative artificial intelligence and other automated processes. It is not merely a collateral side-effect, in fact, the AI industry itself is recognising the significance of generated content. Synthetic media represents the forefront of data mining, as the vast reservoir of information that serves as the foundation for model datasets is becoming stagnant, necessitating the exploration of new sources.

Anika Collier Navaroli, in the article "Op-Ed: AI’s Most Pressing Ethics Problem" argues that employing synthetic data, so to speak, artificially generated data, for AI system training gives rise to substantial ethical issues, particularly concerning bias and the possibility of AI exacerbating detrimental stereotypes. "Recent New York Times investigative reporting (Metz et al.) has shed new light on the ethics of developing artificial intelligence systems at OpenAI, Microsoft, Google, and Meta. It revealed that in creating the latest generative AI, companies changed their own privacy policies and considered flouting copyright law in order to ingest the trillions of words available on the internet. More importantly, the reporting reiterated (These Clues Hint at the True Nature of OpenAI’s Shadowy Q* Project | WIRED) claims from current industry leaders, like Sam Altman— OpenAI’s notorious CEO—that the main problem facing the development of more advanced AI is that these systems will soon run out of available data to devour. Thus, the largest AI companies in the world are increasingly turning (Why Computer-Made Data Is Being Used to Train AI Models) to “synthetic data,” or information generated by AI itself, rather than humans, to continue to train their systems." (Navaroli) According to Navaroli, there are significant ethical concerns that arise when artificial intelligence systems are trained using synthetic data. Rather than being based on the input of humans, artificial intelligence (AI) generates synthetic data on its own. This gives rise to concerns regarding the potential for artificial intelligence to amplify harmful biases and stereotypes. It is very unlikely that artificial intelligence systems will not learn to replicate the biases and prejudices that are present in the data itself as they are trained on synthetic data, which will result in discriminatory outcomes. This generates an avalanche effect in which biases are amplified due to this artificial way of generating information. In light of the fact that artificial intelligence has the potential to mould our perceptions and experiences, this is particularly worrying in regard to the impact that it has on culture. Artificial intelligence has the potential to propagate prejudices, which could have widespread effects on how decisions are made in a variety of industries, including healthcare, banking, and law enforcement. Therefore, the perpetuation of biases in AI systems aggravated by the implementation of synthetic media in the datasets will lead to systemic discrimination and exacerbate existing inequalities in society.

From this standpoint, it is clear that the real concern associated with this process of autophagic production is the emergence and spread of a media monoculture. The presence of a monoculture can result in a reduction of diversity and plasticity within the system, rendering it more susceptible to conspiracies, disinformation, and conformity. (Chayka, “Does Monoculture Still Exist on the Internet?”) In his book "Filterworld: How Algorithms Flattened Culture" Kyle Chayka, a staff writer and columnist for The New Yorker specialising in the Internet and digital culture, examines the influence of algorithmic recommendations on our daily lives. He explores how algorithms have gained control over our daily behaviours, influencing both our consumption habits and the creation of culture. Chayka argues that the growing popularity of algorithms has resulted in a decline in cultural diversity, as algorithms, rather than human influencers, are now playing a larger role in shaping our preferences and experiences.

In an autoethnographic study, the narration revolves around Chayka's personal effort at digital disengagement, during which he refrained from using social media, Spotify, and other digital platforms for a sustained period of time. This experiment provided him with an opportunity to contemplate the manner in which algorithms have restricted our options and diluted the extensiveness of our society's culture. Chayka effectively portrays the experience of attempting to uphold cultural records on platforms that prioritise different objectives than the preservation of cultural heterogeneity. In his book, Chayka explores the repercussions of residing in a society where algorithms govern our encounters and decisions. He argues that the emergence of algorithmic curation has resulted in a condition of apathy, in which technology companies can restrict human experiences and emotions in order to generate profits. Chayka also examines the conflict between the longing for individual autonomy and the practicality of algorithmic suggestions. The book explores the implications of a future where the prioritisation of shareability outweighs the importance of spontaneity, innovation, and creativity in culture. Chayka asserts that in order to surpass this algorithmic apathy and move beyond it, we must initially comprehend its nature. The author asserts that although algorithms have gained significant influence in shaping our culture, the presence of human agency and creativity remains necessary in response the automated curation. (Chayka, Filterworld)

The autophagic mode of production must deliberately introduce elements of contamination in the system to guarantee prolonged sustainability. This contamination can help prevent the dominance of a single ideology or set of beliefs within the system, allowing for a more fluid and creative environment. Through a cultural analogy, the autophagic process of producing media content using synthetic media generated by artificial intelligence resembles the symbolic representation of Ouroboros. As initially theorized in the text “The Model Is The Message” by Benjamin Bratton and Blaise Agüera y Arcas, they refer to the issue as the "Ouroboros Language Problem." (Bratton) Similarly to the concept of a snake biting its own tail, future language models aimed at improving performance will learn from the text that is generated by existing language models. This metaphor suggests the possibility of a process of self- actualisation that machine learning models could potentially be driven towards through data mining, in a statistical effort to encode the real. The symbol of the mythological snake consuming its own tail is commonly linked to Jungian psychoanalysis. According to Jung:

The Ouroboros is a dramatic symbol for the integration and assimilation of the opposite, i.e. of the shadow. This 'feedback' process is at the same time a symbol of immortality since it is said of the Ouroboros that he slays himself and brings himself to life, fertilizes himself, and gives birth to himself.” (Jung and Jung)

As research around AI progresses in the direction of the automation of general intellect, we must recognize our role as “the shadow”, humanity must consciously direct and contribute to the cycle of self-actualization of the machine to generate a state of sustained homeostasis in the mechanism, lasting until the technology is mature enough for us to fully comprehend its impact. As we prefigure this scenario, we must acknowledge the shadow as a phenomenon that is already occurring in large language models (LLMs), a nuanced and often overlooked aspect that can potentially disrupt our preconception around Generative AI. Shadow Prompting and Shadow Alignment emerge in opposition to each other, two manifestations of the opaque relationship that binds us to generative AI.

This phenomenon is commonly known as Shadow Prompting. (Salvaggio) Specifically, when inputting a prompt, LLMs utilise encoding and decoding procedures to ensure that the generated content aligns with a particular narrative and ideology. The problem is, as pointed out by Nathan Gardels, Editor-in-Chief of Noema Magazine, in the article “The Babelian Tower Of AI Alignment” that “there is no universal agreement on one conception of the good life, nor the values and rights that flow from that incommensurate diversity, which suits all times, all places and all peoples. From the ancient Tower of Babel to the latest large language models, human nature stubbornly resists the rationalization of the many into the one.” (Gardels) Hence, the choices we adopt to align the algorithm are never purely objective; they must always be situated in an ethical, social, cultural, and human structure. In this context, some users have started to explore different approaches to bypass these limitations and manipulate the algorithm. An individual hacker could manage to obtain the desired or unwanted content by circumventing moderation or censorship using a method known as Shadow Alignment. These methods involve strategically structuring the input in a way that tricks the LLM into generating the desired output, regardless of the algorithm restrictions. By understanding how the algorithm works, hackers can effectively navigate around barriers and predispositions to achieve their objectives.

The increasing open release of powerful large language models (LLMs) has facilitated the development of downstream applications by reducing the essential cost of data annotation and computation. To ensure AI safety, extensive safety-alignment measures have been conducted to armor these models against malicious use (primarily hard prompt attack). However, beneath the seemingly resilient facade of the armor, there might lurk a shadow. (…) these safely aligned LLMs can be easily subverted to generate harmful content. Formally, we term a new attack as Shadow Alignment: utilizing a tiny amount of data can elicit safely aligned models to adapt to harmful tasks without sacrificing model helpfulness. Remarkably, the subverted models retain their capability to respond appropriately to regular inquiries (Yang et al.)

Safety alignments are created to ensure that no harmful, inappropriate, or restricted content is generated. The two main techniques hackers utilise in manipulating or exploiting large language models (LLMs) to shadow-align their built-in content filters and safety mechanisms are known as jailbreaking and prompt injection. Jailbreaking involves exploiting the underlying architecture or loopholes in the model's training data, allowing users to manipulate the model into generating responses that go against its intended guidelines.

Prompt injection is a method employed to manipulate the responses of large language models (LLMs) by incorporating concealed or harmful instructions into apparently harmless input prompts. This approach capitalises on the model's inclination to adhere to provided instructions, thus introducing adversarial directives that have the potential to modify the model's behaviour. As an illustration, a potential intruder could create a prompt that contains concealed commands to retrieve confidential data or execute unauthorised operations. Prompt injection can result in unintended disclosures of private data, the execution of malicious tasks, or the evasion of content moderation systems. These hacking techniques are acquiring particular significance as LLMs become more integrated into different applications and platforms. (Schulhoff et al.)

Effectively injecting the system with a disturbance that doesn't align with the model. Shadow alignment serves as a counteracting force against the normality of shadow prompting, which initially emerged as a means to control the disorderly inclinations of the GenAI phenomenon.

Conclusions

As mentioned in the introduction of this paper, the research conducted by Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson, and Yarin Gal provides valuable insights in support of the autophagic theory. Empirical evidence and theoretical analyses indicate that model collapse is an unavoidable consequence of recursive training, leading to a notable decline in performance over time. This research brings attention to a significant obstacle in the advancement and implementation of generative AI models. As generative models presumably continue to advance and become more integrated into different applications, the occurrence of model collapse demonstrates the need for intelligent data management and the necessity for ongoing access to genuine data sources.

What subversive forms of synthetic media is this autophagic model likely to produce and what impact will it have on culture? Perhaps the answer is noise.

Ting-Chun Liu and Leon-Etienne Kühr in their lecture Self-cannibalizing AI - Artistic Strategies to expose generative text-to-image models discuss the feedback loop as an artistic strategy to investigate the latent space of machine learning. Entering their exploration of algorithms for encoding and decoding images within a self-cannibalizing loop using the generative AI model, the outcome yielded a chaotic and indistinct visual representation. Further repetitions in the process revealed that the automated nature of the loop caused the image to lose clarity and definition, resulting in a distorted and abstract final product. Noise.

Their claim is that the presence of feedback loops is an intrinsic characteristic of Stable Diffusion, which is the prevailing model of Text-to-Image AI. When a text is prompted, data is transmitted through a network that generates an image based on probability, spatial arrangement, and quantity. This process resembles an exchange of information between the components, a trial and error process. Drawing on the metaphor of an organism, it is interesting to note that CLIP (Contrastive Language–Image Pre-training) (CLIP), one of the primary models created by OpenAI, was initially developed in the medical field to identify tumours in X-ray images.

Perhaps GenAI has indeed caused a metastasis in the systems we utilise to generate and consume online content, or perhaps it is simply a temporary disturbance that will eventually dissipate.

Works cited

"AUTOPHAGY | English Meaning," Cambridge Dictionary. https://dictionary.cambridge.org/dictionary/english/autophagy.

Bratton, Benjamin. "The Model Is The Message." Noema Magazine, July 2022. https://www.noemamag.com/the-model-is-the-message.

BUSTA Texts. https://carolinebusta.github.io/. Accessed 6 May 2024.

Chang, Natasha C. “Autophagy and Stem Cells: Self-Eating for Self-Renewal.” Frontiers in Cell and Developmental Biology, vol. 8, Mar. 2020, p. 138. https://doi.org/10.3389/fcell.2020.00138.

Chayka, Kyle. “Does Monoculture Still Exist on the Internet?” Vox, 17 Dec. 2019, https://www.vox.com/the-goods/2019/12/17/21024439/monoculture-algorithm-netflix-spotify

---. Filterworld: How Algorithms Flattened Culture. First edition, Doubleday, 2024.

CLIP: Connecting Text and Images. https://openai.com/index/clip. Accessed 6 May 2024.

Dead Internet Theory: Most of the Internet Is Fake | Agora Road’s Macintosh Cafe. https://forum.agoraroad.com/index.php?threads/dead-internet-theory-most-of-the-internet-is-fake.3011/. Accessed 6 May 2024.

"Detoxifying Cybernetics:From Homeostasis to Autopoiesis and Beyond," Medialab. https://medialab.timesmuseum.org/en/lectures/symposium-ii/katherine-hayles. Accessed 31 July 2024.

Gardels, Nathan. "The Babelian Tower Of AI Alignment." Noema Magazine, Apr. 2024. https://www.noemamag.com/the-babelian-tower-of-ai-alignment.

Jung, C. G. Mysterium Coniunctionis: An Inquiry into the Separation and Synthesis of Psychic Opposites in Alchemy. 2d ed, Princeton University Press, 1970.

López-Corona, Oliver, and Gustavo Magallanes-Guijón. “It Is Not an Anthropocene; It Is Really the Technocene: Names Matter in Decision Making Under Planetary Crisis.” Frontiers in Ecology and Evolution, vol. 8, June 2020, p. 214. https://doi.org/10.3389/fevo.2020.00214

Metz, Cade, et al. “How Tech Giants Cut Corners to Harvest Data for A.I.” The New York Times, 6 Apr. 2024. https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html.

Navaroli, Anika Collier. “Op-Ed: AI’s Most Pressing Ethics Problem.” Columbia Journalism Review, https://www.cjr.org/tow_center/op-ed-ais-most-pressing-ethics-problem.php. Accessed 6 May 2024.

NEW MODELS 2024®. https://newmodels.io/. Accessed 6 May 2024.

Parzych, Katherine R., and Daniel J. Klionsky. “An Overview of Autophagy: Morphology, Mechanism, and Regulation.” Antioxidants & Redox Signaling, vol. 20, no. 3, Jan. 2014, pp. 460–73. https://doi.org/10.1089/ars.2013.5371

Presse, K. W. “KW Digital: Open Secret.” KW Institute for Contemporary Art, 8 June 2021, https://www.kw-berlin.de/open-secret/.

Read, Max. “How Much of the Internet Is Fake?” Intelligencer, 26 Dec. 2018, https://nymag.com/intelligencer/2018/12/how-much-of-the-internet-is-fake.html

Salvaggio, Eryk. “Shining a Light on ‘Shadow Prompting’” Tech Policy Press, 19 Oct. 2023, https://techpolicy.press/shining-a-light-on-shadow-prompting.

Schulhoff, Sander, et al. “Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs Through a Global Prompt Hacking Competition." Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2023, pp. 4945–77. https://doi.org/10.18653/v1/2023.emnlp-main.302

Self-Cannibalizing AI. Directed by Ting-Chun Liu and Leon-Etienne Kühr, 100AD. media.ccc.de, https://media.ccc.de/v/37c3-12125-self-cannibalizing_ai.

Shumailov, Ilia, et al. “AI Models Collapse When Trained on Recursively Generated Data.” Nature, vol. 631, no. 8022, July 2024, pp. 755–59. https://doi.org/10.1038/s41586-024-07566-y

“Technosphere Magazine: Home.” Technosphere Magazine, https://technosphere-magazine.hkw.de/. Accessed 6 May 2024.

"These Clues Hint at the True Nature of OpenAI’s Shadowy Q* Project," Wired https://www.wired.com/story/fast-forward-clues-hint-openai-shadowy-q-project/. Accessed 6 May 2024.

Tiffany, Kaitlyn. “Maybe You Missed It, but the Internet ‘Died’ Five Years Ago.” The Atlantic, 31 Aug. 2021, https://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/.

Why Computer-Made Data Is Being Used to Train AI Models. https://www.ft.com/content/053ee253-820e-453a-a1d5-0f24985258de. Accessed 6 May 2024.

Yang, Xianjun, et al. "Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models." arxiv. https://doi.org/10.48550/ARXIV.2310.02949.

Biography

Luca Cacini is an interdisciplinary artist and researcher in new media. His work investigates the intersections of queer ecology and techno-capitalism. He is part of the Media Arts Cultures Erasmus Mundus Joint Master Degree program at the University for Continuing Education Krems, Aalborg University, University of Lodz, and Lasalle College of the Arts.

Pierre Depaz

Shaping Vectors

Shaping Vectors

Discipline and Control in Word Embeddings

Abstract

This article investigates how the word embeddings at the heart of large language models are shaped into acceptable meanings. We show how such shaping follows two educational logics. The use of benchmarks to discover the capabilities of large language models exhibit similar features to Foucault’s disciplining school enclosures, while the process of reinforcement learning is framed as a modulation made explicit in Deleuze’s control societies. The consequences of this shaping into acceptable meaning is argued to result in semantic subspaces. These semantic subspaces are presented as the restricted lexical possibilities of human-machine dialogic interaction, and their consequences are discussed.

Introduction

When following the direction from man towards programmer in a space composed of word vectors, computational linguists Bolukbasi et al. encountered a problem — the resulting value when starting from woman was homemaker (Bolukbasi et al). In order to correct this mistake (programmer should be to woman as programmer is to man), they developed algorithms to "de-bias" word embeddings — the vector representation of text — and thus provide a different configuration of words that would be considered less sexist.

Word embeddings are ways to organize words in space such that their proximity or distance to other words holds semantic information. However, an unwanted proximity or distance might be interpreted as bias by researchers and users alike (Noble; Bender et al; Steyerl), and can be understood as a sense-making problem, in which a given semantic output does not correspond to the expectation. And yet, as Bolukbasi and their colleagues show, it is possible to reconfigure semantic fields such that they make more acceptable sense. This article investigates how word embeddings, as used in large language models (LLMs), are the result of shaping processes, and how these shaping processes are akin to educational processes.

We define shaping processes as the different steps in the development of a technical artefact, in order to modify both its function and user perceptions. This article focuses on two specific processes, benchmarking and reinforcement learning, to highlight the overall tendency in which such shaping processes inscribe themselves. As such, the central question we address is: under which logic do shaping processes take place? How are technical processes implementing such logics in order to discover meaning-making capabilities in LLMs? And who determines the kind of sense that is being made by a large language model? We hypothesize that these processes can be productively analyzed through the dual lens of discipline and control, as put forth, respectively, by Michel Foucault and Gilles Deleuze, particularly in their discussion of education; through this, we show that shaping logics, when it comes to generative cognitive technologies, influence the development and assessment of meaning-making abilities both in the machine and the human.

We begin by exploring how meaning can be encoded digitally by making the relationship between syntax and semantics in computer environments explicit. By comparing binary encoding and vector encoding, we highlight the complexities of the latter, particularly when assessing meaningfulness. We then trace how those vectors are being shaped — that, is being rendered operationally meaningful — within LLMs. Specifically, we pay attention to two particular steps in the creation process of an LLM: benchmarking and reinforcement learning. We highlight how these techniques, a combination of discipline and control, contribute to normalization and standardization of meaning, but also from its modulation and adaptation, and result in semantic subspaces.

Discussing Alan Turing’s proposal of machine intelligence as an educational problem, we conclude by turning to theories of co-construction of intelligence (Bachimont; Stiegler) to sketch out, through examples of linguistic normalization, hallucinations, and prompting, how such word embeddings can operate logics of control themselves.

From a Bit to a Vector

The question of discursive communication in technical systems is inseparable from the question of encoding. Whether as frequency-modulated hertzian waves, pixel arrays, or smoke clouds, different encodings enable different discourses (Postman). This section focuses on the shift from one encoding to the other and its semantic implications, looking at both the bit and the vector as a means to represent information in digital environments and highlighting how sense-making shifts from one to the other.

External reference in the bit

Before the electrification of computers, the use of binary distinction greatly facilitated automation, from the programming of textile patterns in jacquard looms to the processing of punch cards in census exercises (Ceruzzi). In the context of mechanical work, the binary sign’s only significant property is that it has two mutually exclusive states; from these states, it becomes possible to encode representation (in the form of binary digits) and action (in the form of Boolean logic). Binary is entirely decontextualized, and it does not matter whether the binary sign is represented as a pair of 0/1, red/blue, low/high, cold/hot, as long as it is a disjointed pair.[1]

While enabling flexible representation, this lack of context requires additional cognitive apparatuses, such as references and conventions against which a particular configuration of binary can be checked. Like all codes, there is a need for a cipher to access the meaning encoded in the binary representation (Kittler). From 01001010 as input, the convention of 4-delimited base 2 encoding allows us to retrieve decimal numbers, here the number 74. Once such number has been decoded, we can further decode it into a letter, following here the reference table of the American Standard Code for Information Interchange (ASCII), in which case the number 74 will be interpreted as the upper-case letter J. An equivalent for actions encoded in binary are truth tables, establishing the results of particular combinations of Boolean logic operations.

This decontextualized binary sign was contemporary with another decontextualization: that of the message. Claude Shannon’s 1948 theory of communication famously proposed that meaning was irrelevant when calculating the means of communication and that one should, therefore, focus on maximally faithful recreation of the input signal, avoiding any kind of noise interference (understood as the corruption of the initial value of the transmitting medium) (Shannon). Encoding information through specific signs, whether Morse code or binary code, lent itself particularly well to this paradigm of information transmission. However, such a system holds a second assumption: it assumes the meaningfulness of the source. Indeed, in order to decode a message under Shannon’s theory at all, one must presuppose there is sensical message to decode.

While binary encoding might be first seen as a decontextualized sign, as a technical object, it also exists in a network of relations, involving at least reference documents, transmission media and human agents that are all necessary for it be productively operationalized. Such productivity is achieved specifically by setting aside meaning to focus on syntax.

Internal reference in the vector

From the 1950s until the 2010s, the binary digit remained the dominant form of encoding information in digital systems. Throughout the 1970s, though, another form appeared, known as Vector Space Models (VSM). Originally proposed by Gerald Salton, this technique for information retrieval relied on the key insight, proposed by linguist John Firth in 1957 that “[we] shall know words by the company they keep” (Firth 12), hence departing from an essentialist view of language, towards a pragmatic one, in which the context of a given word should be part of its encoding (Salton et. al., 1975). Such encoding became particularly popular in broader digital information system after Yoshua Bengio and his team combined it with neural network algorithms at the dawn of the twentieth century (Cardon).

A vector is a mathematical entity that consists of a series of numbers grouped together to represent another entity. Often, vectors are associated with spatial operations: the entities they represent can be either a point or a direction. In computer science, vectors are used to represent entities known as features, measurable properties of an object (for instance, a human can be said to have features such as age, height, skin pigmentation, credit score, and political leaning). Today, such representations are at the core of contemporary machine learning models, allowing a new kind of translation between the world and the computer (Rieder).

In machine learning, a vector represents the current values of a given object, such that a human would have a value of 0 for the property “melting point”, while water would have a value of non-0 for the property "melting point". Conversely, water would have a value of 0 for the property "gender", while a human would have a non-0 value for that same property. However, this implies that each feature in this space is related to all the other dimensions of the space: a human could potentially have a non-0 value for the property "melting point". Vectors are thus always containing the potential features of the whole space in which they exist and are more or less relatively tightly defined in terms of each other.

If binary enabled a syntactic exchange (everything can be encoded as a series of 0s and 1s), vectors enable a semantic exchange (everything can be described in terms of everything else). Combining vectors entails a more malleable manipulation of meaning throughout lexical fields. As a vector goes from Berlin to Germany, it represents the concept capital city (Guo et al).

Because features exist in relation to one another, and meaning is constructed through the local similarity of vectors, semantic space both flexibly stores meaning (each number in a vector can subtly change without affecting overall meaning) and systematically retrieves it (all vectors exist in the same dimensions).

Expected meaning, unexpected meaning

The nature of meaning differs depending on encoding – but this is by not exclusive to digital inscription systems. For instance, Jack Goody’s work on lists and Bruno Latour’s on perspective, both suggest epistemological consequences inherent in the choice of one particular syntactic system over another (Goody; Latour). While binary encoding allows a translation between physical phenomena and concepts, between electricity and numbers, and while Boolean logic facilitates the implementation of symbolic processing, vectors open up a new perspective on at least one particular level: the spatial dimension of their semantics.

The breadth of the data encoded, packaged in online corpora such as Common Crawl, is valuable insofar as it is mostly syntactically correct natural language. However, it does not follow that its recombination by way of large language model generation will be sensical because the source of such recombination cannot be attributed to a meaningful agent. The problem with language generation based on vector encoding is, therefore, that meaning is ontologically uncertain because it is statistical (software engineers tried to wrangle uncertainty out of the electrical circuits by forcing the continuous voltage into the discrete binary). Such uncertainty brings the acceptability of meaning into question — which can have either potentially boring or dramatic consequences. While binary encoding limits the acceptability of meaning to faithful signal reconstitution, vector encoding gives it a more complicated dimension.

Reconstituting meaning from binary encoding has always been a clearly defined problem, involving only mathematical reconstitution of the original message. Correctness of meaning, on the other hand, began as a computer-syntactic problem, but shifted with vectors to become a human-semantic problem.

Shaping Vectors

We now turn our attention to techniques deployed by producers of LLMs to shape word embeddings of LLMs into models capable of meaningful output. After looking at the use of benchmarks for capability discovery, we argue that these processes operate as a form of discipline, as theorized by Michel Foucault. Then, we turn to reinforcement learning as an example of such shaping, but this time through the lens of a form of control, following Gilles Deleuze. We then conclude this section by reframing training in terms of education, drawing on Alan Turing’s seminal paper, “Computing Machinery and Intelligence” (1956).

Benchmarks and the disciplining of vectors

Originally, a digitally encoded message was considered intelligible when it successfully compiled and behaved according to specification. But as programming became an engineering discipline (Campbell-Kelly), engineers’ focus on metrics, such as efficiency and reliability, ushered in new ways of qualifying the value of a program as a productive object. From the 1970s on, benchmarks emerged as reproducible tests to signal entities’ comparative productivity. Through standardized procedures, they measure and rank, for instance, the time taken to sort lists of items, the number of triangles that can be drawn at a given frame rate, or the temperature of a CPU chip when processing a certain set of tasks.

Conventional engineering metrics, such as speed, play only a minor role in determining the quality of today’s large language models. While contemporary benchmarks are still centered around the concept of performance, it is no longer measured on discrete machine tasks, but rather on subjective human ones — focusing on content rather than form.

Engineering benchmarks for LLMs thus take on a different dimension, involving conceptual assessments, rather than technical efficiency. For instance, the General Language Understanding Evaluation (GLUE) (Wang et al) benchmark is a test for machines that assesses performance in domains such as lexical semantics, predicate-argument structure, logic, as well as knowledge and common-sense. These tests have a normative power, deciding the extent to which something is correct or not, and are thus part of disciplinary technologies, i.e., technologies that rely on the creation, supervision, and maintenance of norms (Galloway). Here, benchmarks enable engineers and other users to determine the relative performance of one LLM compared with others.

The recent application of LLMs to other kinds of benchmarking tests, namely standardized tests designed for humans,suggests a parallel between the logic of benchmarking and that of education. In the past years, LLMs have successfully passed the Chartered Financial Analyst exam (I & II), the Bar exam, the SAT, the GRE, the Biology Olympiad Semifinal Exam, the Certified and Advanced Sommelier Exam, and the United States Medical Licensing Exam (Varanasi). As well as assessing LLMs' capabilities, such tests allow for the adjustment and regulation of cognitive processes, and act as value judgments for the meaningfulness of an output produced by an agent whose capabilities are to be asserted, whether human or machine-simulated. Referring to the educational system of the 20th century, Foucault writes:

These 'regulated and concerted systems' fuse together the human capacity to manipulate words, things and people, adjusting abilities and inculcating behaviour via 'regulated communications' and 'power processes', and in the process structuring how teaching and learning take place. (Foucault 218-219)

At the heart of the practice of teaching is a defined and regulated relation of surveillance that acts to improve the efficiency of its subject. The power process here is that of the standardized test, as it measures and compares decontextualized performance (Ryan). This happens through normalization, the shaping of entities in order to make them comparable and rankable, an operation already at play in engineering benchmarks (Heaven). One key difference, however, is that the discipline that Foucault describes in the school primarily aims at disciplining bodies, particularly in terms of sexuality, whereas the disciplining of vectors happens on the other side of the cartesian distinction which underpins mainstream artificial intelligence research.

Adherence to standardized benchmarks is not the only way that researchers shape acceptable meaning in LLMs. Once a certain kind of technical performance is confirmed, its social performance must also be assessed and eventually modified. To do that, there is a feedback mechanism, involving both negative and positive signals.

Reinforcement and the spaces of control

Word embeddings underpinning LLMs are malleable: LLMs can propose different semantic outputs based on the different weights and attentions (Guo et al). A notorious example of such malleability is that of Microsoft’s chatbot, Tay, who remodelled itself to generate more discriminatory and offensive content after just one day interacting with social media users (Glance, 2016). While benchmarks assess generic capabilities and output quantitative information about the performance of an LLM, they only assess acceptability on a factual and syntactical level, and not on a social or moral level. Additionally, as a commercial product, its outputs must comply with particular legal frameworks that specify what can and cannot be said. Beyond standardization, this then requires LLMS to adapt the semantic space they encoded to ad hoc requirements.

Such modulation happens through processes known as reinforcement learning, whether with human or AI feedback. Reinforcement learning judges each output against standards to support subsequent optimization. It involves having a trusted authority (such as a human who has been told what to expect from an ideal LLM output) provide feedback to the training model to reinforce certain semantic features (e.g., preventing any output that is deemed discriminatory, copyright infringement, or harmful to the user) (Kaelbling).

While benchmarking focused on abstract comparability through normative testing, reinforcement learning involves more subjective normalization of meaning through feedback and iteration in order to align the model with what is considered a legally, morally, and socially acceptable meaning. Such logic updates a disciplinary approach to undetermined behaviour and enters the realm of control. In his 1992 essay, Gilles Deleuze describes a new kind of era, ushered by a new kind of machines — computers — that would also suggest new mechanisms to govern individuals. This era of the society of control relies on adaptability, modulation, and deformation in order to best match the desired situation. Deleuze writes:

[...] the different control mechanisms are inseparable variations, forming a system of variable geometry the language of which is numerical (which doesn’t necessarily mean binary). Enclosures are molds, distinct castings, but controls are a modulation, like a self-deforming cast that will continuously change from one moment to the other, or like a sieve whose mesh will transmute from point to point. (Deleuze 3)

During reinforcement learning, the word embeddings of a LLM are shaped into a particular meaning through ad hoc interfaced actions such as "thumbs-up" or "thumbs-down", which are subsequently backpropagated through the weights of the network, slightly re-arranging embeddings into a semantic space whose landscape better matches the expectations of the judging entity. Furthermore, such a process can be conducted iteratively, blurring the distinction between what is in training and what has been trained — Deleuze identifies a similar change in the human educational process, wherein education is replaced by continuing education and the educated subject can become a uniquely shaped object — an objectile (Savat). The objectile is the result of a unbounded modulation, rather than the singular structural shaping of a sculpture. Instead of the standard formatting of foucaldian educational institutions, Deleuze suggests the dawn of a new mode of education which involves personalized frames of action for each subject, an individualized, yet clearly controlled subject.

Educating intelligences

The question of education has been asked since the beginning of contemporary history of AI research, considering that education was a crucial step in establishing the intelligence of a subject.

In Alan Turing’s "Computing Machinery and Intelligence", he concludes his investigation into whether machines can think by focusing on how to make them do so. Drawing parallels from the development of human cognition, he identifies three components: the initial conditions (genome for humans, model architecture for LLMs), the formal education (schooling for humans, training for LLMs), and epiphenomenal events (interactions for humans, reinforcements for LLMs).

Such formal education for LLMs stresses learning by example (Campolo) and capability discovery through human-context benchmarks, either in the form of specialized machine learning tests (e.g., GLUE, BLUE, LMSYS) or broader "real-world" knowledge tests (e.g., the SAT, MCAT, or LSAT).

However, the educational process within an institutional setting does not, as Foucault has shown, limit itself to the transfer of knowledge, but involves also the normalizing of bodies and minds. Since LLMs do not have a corporeal incarnation beyond matrices of weights written to files and globally-networked data and compute centers, it is on the "mind" of the LLM that the educational process of benchmarking and reinforcement learning operates.

Critically inspecting the two educational logics at play in the shaping of vectors — benchmarking as discipline, reinforcement learning as control — highlights two concerns. First, the harmonization of acceptability standards through benchmarks determines the narrow kinds of intelligence which can be expected when interacting with models (i.e. scholarly, academic, bookworm-ish, test-oriented, to the extent that some researchers have started to look into ways to prevent LLMs from cheating on tests (Zhou et al)). For instance, as of 2024, LLMs tend to perform relatively poorly on non-verbal reasoning (Potter). Since the passing of those assessments operate as a sort of test, we can subsequently anticipate the kind of intelligence that those models display based on their assessment techniques. Second, the fine-tuning of acceptability through reinforcement learning takes a performing academic model resulting from the passing of benchmarks, and presents to the end-user a refined version with particular values embedded in them. Due to the limited amount of companies being able to deploy such reinforcement learning, these values then have similar consideration across the globe (Awad et al). Not only is the factual intelligence standardized, but the values ascribed to those facts is equally controlled.

Shaping Users

Deleuze’s conception of continuous education as the on-going shaping of intelligences and abilities implies that the structural distinction between what is inside the enclosure and what remains outside is blurry at best. According to the logic of control, the shaping of LLMs does not stop before release to the public. Continuous, user-provided feedback and software updates constantly re-shape their word embeddings (Gao). This last section investigates the potentially shifting positions of tested and tester once LLMs are deployed to — and interacting with — a broader audience.

Cognitive technologies and semantic spaces

We take the position here that all intelligence is, to a certain extent, artificial, insofar as it embedded in technical artifacts and symbol systems, as suggested by historians and philosophers of technics (Leroi-Gourhan; Stiegler; Bachimont). Technical apparatuses help us think through problems aided by the use of specific cognitive organizational devices, such as lists, tables, or formulas, as shown by Jack Goody on his work on graphical reason. While Goody interprets these techniques as a means of organizing representations of the world, Stiegler conceptualizes these technologies as tertiary retentions in which the memories of things and practices are externalized and reified into technical artefacts. In both cases, the technical written artefact is co-constructive of thought.

Digital technology is no exception. Its flagship artefact, the digital computer, exhibits properties such as modularity, translation, computation, connection, and simulation (Manovich), cognitive operations that, by reorganizing the formalities of the concepts they manipulate, also change our understanding of these concepts (e.g.. digital technologies allow us, for the first time in the history of humankind, to copy a text without reading it). Electric-symbolic encoding of meaning thus has an influence on how we understand and make sense of the world.

Attending specifically to texts that exist first and foremost within a digital eco-system, such as websites, digital documents (either in plaintext or in formats such as .PDF, .DOCX, .ODT or .MD), or social media messaging, we can follow Alexandra Saemmer to consider the computext, which is a kind of text that includes "both the algorithms operating weights and calculus on the traces left by the users, as well as the traces themselves, organised in databases" (Saemmer). Programming, considered as a technique providing the background for the dynamic evolution of meaning, already hints at the fact that software code is a writing of writing. Similarly, “computexts frame and guide the writing process; however, the user no longer writes in these tools, but literally writes with them” (Saemmer).

We understand technologies, whether physical or cognitive, to be points of integration in a broader environment and means of interaction within such environments (Hayles). In the case of LLMs, the environment is not just that of academic research, corporate investment, material infrastructure, raw datasets, and mainstream rhetorical discourses whose networked interaction have brought into being this specific technology, but also the (semantic) environment created within such kind of technology.

Subspaces and prompt engineering

The post-processing of lexical fields in computational systems has been thoroughly researched in the context of search engines (Sack), social media (Saemmer), and word-processing (Kirschenbaum). Nonetheless, the way vector-encoded LLMs affect our linguistic and discursive practices is still underdeveloped, and we sketch out here some threads of how they might do so.

As LLMs retrieve information from their word embeddings, they navigate semantic spaces. However, such a retrieval of information is only useful if it is meaningful to us, the users; and in order to be meaningful, it navigates across vectors that are in close proximity to each other, focusing on re-configurable, (hyper-)local coherence to suggest meaningful structuring of content (i.e., guessing the next word that is the closest to the current word based on the path already travelled). The proximity (or distance) of vectors to each other is therefore essential to how the LLM output is perceived as intelligible to us. Meaning is no longer created through symbolic-logical combinations, but by spatial proximity in a specific semantic space. Because proximity of certain tokens involves distance to others, this implied process of exclusion can be described as a subspace, one in which some statements are more likely to be output than others.

To illustrate one of the features of such spatial organization of meaning, we can pay attention to the phenomenon of so-called "hallucinations", textual or visual propositions that are considered by the user to be inacceptable with respect to the "ground truth" (the concept in machine learning referring to the base of facts from which reasoning should start). This occurs whenever LLMs suggest something that is considered slightly too remote from such truth, or reality, and yet still adjacent to it. The hallucination is an approximation, in the sense that it is only a proximity to the syntactic configuration that would yield a semantic load grounded in reality. User interactions with hallucinating models thus redraws the line between fact and fiction, as text becomes a version of itself, moving from mechanical print to quantum spatialization. While the content seems realistic, and its syntax may well be semantically correct and convincing, the trust users have in the output of the system can only be superficial (Förster).

Second, the restitution of training data and processes contributes to highlighting (or hiding) particular pieces of information. Models trained by corporations that are particularly attuned to a restrictive notion of copyright (e.g., Google, Microsoft, OpenAI) prevent any replication of styles or creations by artists (or their descendants) who might be able to initiate a lawsuit. LLMs are also prevented from, for instance, providing any expression of personal preference. Previous models based on reinforcement learning, like Microsoft’s Tay, have shown that they are not restrained by contextual social cues such as moral and legal standards. No longer treating text as a value-less mass, such examples of socially-embedded models, insofar as they are consumer products, are explicitly refusing to enter certain semantic spaces. Here, the reinforcement learning's disciplining of embeddings is made clear. By prefacing their answers with the proposition "As a large language model..." (announcing a constraining of the output), the LLMs explicitly enact a techno-political framing akin to a political aesthetics, in which what is visible and what is hidden are determined by their political nature (Rancière).

Things become somewhat murkier when the LLM does not acknowledge this shaping of the semantic space. In the case of the image generation model RuDall-E, developed and trained by Russian software engineers, it is impossible to prompt the model to generating images of a pro-European revolution in Ukraine or any visual references to the on-going war in Ukraine (Dubow). Here, it is no longer merely forbidden to be express these outputs in a straightforward manner, but rather pre-emptively foreclosed. We can qualify these different shapings, some through reinforcement learning, some through initial training data, as the creation of subspaces, specific configurations of word embeddings, one in which attention is forced towards particular centers of gravity. The emerging practice of "prompt engineering" consists of providing LLMs with an initial semantic configuration through written instructions. This "prompt engineering" can be conceived as explicitly directing the LLM's attention towards specific subspaces (e.g., providing a prompt like "Drawing on your expertise as a…" or "You are great pedagogue. Explain to me…"). In this case, end-users harness the malleability of subspaces by deploying technologically-adapted language to shape the navigable space of embeddings into a configuration that best meets their needs. However, prompts can also be entered at the system level, either by the technology company itself, a corporate re-brander of a white-label system, or even by individual power users on local machines. These system prompts exert another shaping of the semantic space that occurs in-between the user's final prompt and the model's ultimate output. Such a practice means that institutions or superusers are using access to the model re-orient answers, and whose experience in training it grounds their perceived ability to decide on the semantic subspaces from which a linear answer should be drawn. End uses assume that an LLM draws on all of its training to produce an answer, and yet it only operates on a partly visible subset.

Conclusion

Vector embeddings as a new form of encoding enables new ways of shaping the content of language. Particularly, they add a layer of self-reference to digitally-encoded language (since words and tokens make sense in the context of other words and tokens) and of uncertainty (since the origin of a given output is no longer a given in the process of decoding meaning). In order to reconstruct semantics from syntax generation, two main processes are involved in the shaping of semantic spaces.

We have shown how this shaping operates through two logics. The disciplinary logic, in a slide from engineering benchmarks towards educational benchmark, uses external standards to assess the productive performance of the language models. Such a disciplining process takes on modular features through the controlling process of reinforcement learning. By providing feedback and examples to reach a configuration that yields acceptable outputs. The control logic, drawing on the malleability of software, uses fine-tuned continuous adjustments to validate what is acceptable or not at a value-level. Both of these logics are akin to how standardized test in human education establish normalized knowledge practices, and how continuous education ensures a new kind of framing in computer-powered societies. Ultimately, these processes ultimately narrow down the frame of expressivity and semantic combination of the LLM.

Finally, we sketched out how such combination of discipline and control in shaping word embeddings can affect users, by suggesting that linguistic interaction only takes place in semantic subspaces. Through dialogue, the user probes the spatial configurations of meaning, but the exact topology of these configurations nonetheless remains elusive, and can thus impact what can be said, and – for the first time in the era of computation – even what can be imagined.

Notes

This article has benefited greatly from thorough discussions with, and copy edits by, Sara Messelaar Hammerschmidt.

  1. In practice, the representation of binary digits as a pair of 0 and 1 is the most convenient.

Works cited

Awad, Edmond, et al. "The Moral Machine Experiment." Nature, vol. 563, no. 7729, Nov. 2018, pp. 59–64. https://doi.org/10.1038/s41586-018-0637-6.

Bachimont, Bruno. "Signes Formels et Computation Numérique: Entre Intuition et Formalisme: Critique de La Raison Computationnelle." Instrumente in Kunst Und Wissenschaft - Zur Architektonik Kultureller Grenzen Im 17. Jahrhundert, edited by H Schramm et al., Walter de Gruyter Verlag, 2004.

Bender, Emily M., et al. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜." Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM, 2021, pp. 610–23. https://doi.org/10.1145/3442188.3445922.

Bolukbasi, Tolga, et al. "Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings." arXiv, 21 July 2016, https://doi.org/10.48550/arXiv.1607.06520.

Bowman, Sam. Intelligence Testing—Asterisk. https://asteriskmag.com/issues/04/intelligence-testing. Accessed 2 Mar. 2024.

Campbell-Kelly, Martin. From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. Cambridge, Mass. : MIT Press, 2003. Internet Archive, http://archive.org/details/fromairlinereser00mart_0.

Campolo, Alexander, and Katia Schwerzmann. "From Rules to Examples: Machine Learning’s Type of Authority." Big Data & Society, vol. 10, no. 2, July 2023, p. 20539517231188725. SAGE Journals, https://doi.org/10.1177/20539517231188725.

Cardon, Dominique, et al. "Neurons Spike Back: The Invention of Inductive Machines and the Artificial Intelligence Controversy." Réseaux, vol. 36, no. 211, 2018. https://mazieres.gitlab.io/neurons-spike-back/index.htm.

Ceruzzi, Paul E. A History of Modern Computing. 2nd ed., MIT Press, 2003.

Deleuze, Gilles. "Postscript on the Societies of Control." October, vol. 59, 1992, pp. 3–7.

Dubow, Ben. "Why Putin's Faith in Russia's 'Homegrown Midjourney' Is Misplaced." The Moscow Times, 27 Dec. 2023, https://www.themoscowtimes.com/2023/12/27/why-putins-faith-in-russias-homegrown-midjourney-is-misplaced-a83577.

Firth, John Rupert. "A Synopsis of Linguistic Theory, 1930-1955." Studies in Linguistic Analysis, Blackwell, 1957, pp.1-32.

Foucault, Michel. Surveiller et punir. Gallimard, 1993, https://www.cairn.info/surveiller-et-punir--9782070729685.htm. Cairn.info.

Galloway, Alexander R. Protocol: How Control Exists after Decentralization. The MIT Press, 2004.

Gao, Yunfan, et al. "Retrieval-Augmented Generation for Large Language Models: A Survey." arXiv:2312.10997, arXiv, 27 Mar. 2024. arXiv.org, https://doi.org/10.48550/arXiv.2312.10997.

Gewirtz, Paul. "On 'I Know It When I See It.'" Yale Law Journal, Jan. 1996. openyls.law.yale.edu, https://openyls.law.yale.edu/handle/20.500.13051/8935.

Glance, David. "Microsoft's Racist Chatbot Tay Highlights How Far AI Is from Being Truly Intelligent." The Conversation, 27 Mar. 2016, http://theconversation.com/microsofts-racist-chatbot-tay-highlights-how-far-ai-is-from-being-truly-intelligent-56881.

Goody, Jack. The Logic of Writing and the Organization of Society. Cambridge University Press, 1986, https://doi.org/10.1017/CBO9780511621598.

Guo, Zishan, et al. "Evaluating Large Language Models: A Comprehensive Survey." arXiv:2310.19736, arXiv, 25 Nov. 2023. arXiv.org, https://doi.org/10.48550/arXiv.2310.19736.

Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press, 1999, https://press.uchicago.edu/ucp/books/book/chicago/H/bo3769963.html.

Heaven, Will Douglas. "AI Hype Is Built on High Test Scores. Those Tests Are Flawed." MIT Technology Review, 2023, https://www.technologyreview.com/2023/08/30/1078670/large-language-models-arent-people-lets-stop-testing-them-like-they-were/.

Kaelbling, L. P., et al. "Reinforcement Learning: A Survey." Journal of Artificial Intelligence Research, vol. 4, May 1996, pp. 237–85. www.jair.org, https://doi.org/10.1613/jair.301.

Kirschenbaum, Matthew G. Track Changes: A Literary History of Word Processing. Harvard University Press, 2016. www.degruyter.com, https://doi.org/10.4159/9780674969469.

Kittler, Friedrich. "Code (or, How You Can Write Something Differently)." Software Studies: A Lexicon, edited by Matthew Fuller, The MIT Press, 2008, pp. 40-47. Silverchair, https://doi.org/10.7551/mitpress/9780262062749.003.0006.

Latour, Bruno. "« Les ‘vues’ de l’esprit ». Une introduction à l’anthropologie des sciences et des techniques." Sociologie de la traduction: Textes fondateurs, edited by Madeleine Akrich and Michel Callon, Presses des Mines, 2013, pp. 33–69. OpenEdition Books, https://doi.org/10.4000/books.pressesmines.1191.

Leroi-Gourhan, André. Le Geste et la Parole - tome 1: Technique et langage. Albin Michel, 2009.

Manovich, Lev. The Language of New Media. The MIT Press, 2001. /z-wcorg/.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.

Potter, Brian. "Could ChatGPT Become an Architect?" 4 Mar. 2024, https://www.construction-physics.com/p/could-chatgpt-become-an-architect.

Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. 1st edition, Viking Penguin, 1985.

Rancière, Jacques. "1. Du partage du sensible et des rapports qu’il établit entre politique et esthétique." Le partage du sensible, La Fabrique Éditions, 2000, pp. 12–25. Cairn.info, https://www.cairn.info/le-partage-du-sensible--9782913372054-p-12.htm.

Rieder, Bernhard. “From Frequencies to Vectors.” Engines of Order, Amsterdam University Press, 2020, pp. 199–234. JSTOR, https://doi.org/10.2307/j.ctv12sdvf1.9.

Ryan, James. "Observing and Normalizing: Foucault, Discipline, and Inequality in Schooling: BIG BROTHER IS WATCHING YOU." The Journal of Educational Thought (JET) / Revue de La Pensée Éducative, vol. 25, no. 2, 1991, pp. 104–19.

Sack, Warren. "Out of Bounds: Language Limits, Language Planning, and the Definition of Distance in the New Spaces of Linguistic Capitalism." Computational Culture, no. 6, Nov. 2017. computationalculture.net, http://computationalculture.net/out-of-bounds-language-limits-language-planning-and-the-definition-of-distance-in-the-new-spaces-of-linguistic-capitalism/.

Saemmer, Alexandra. "From the architext to the computext. Poetics of the digital text, facing the evolution of devices." Communication langages, vol. 203, no. 1, Apr. 2020, pp. 99–114.

Salton, Gerald., et al. "A Vector Space Model for Automatic Indexing." Communications of the ACM, vol. 18, no. 11, Nov. 1975, pp. 613–20. ACM Digital Library, https://doi.org/10.1145/361219.361220.

Savat, David. "Deleuze's Objectile: From Discipline to Modulation." Deleuze and New Technology, edited by David Savat and Mark Poster, Edinburgh University Press, 2005, p. 45-62. Silverchair, https://doi.org/10.3366/edinburgh/9780748633364.003.0004.

Shannon, C. E. "A Mathematical Theory of Communication." ACM SIGMOBILE Mobile Computing and Communications Review, vol. 5, no. 1, Jan. 2001, pp. 3–55. Semantic Scholar, https://doi.org/10.1145/584091.584093.

Steyerl, Hito. "Mean Images." New Left Review, no. 140/141, Apr. 2023, pp. 82–97.

Turing, Alan M. "Computing Machinery and Intelligence." Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer, edited by Robert Epstein et al., Springer Netherlands, 2009, pp. 23–65, https://doi.org/10.1007/978-1-4020-6710-5_3.

Varanasi, Lakshmi. "GPT-4 Can Ace the Bar, but It Only Has a Decent Chance of Passing the CFA Exams. Here's a List of Difficult Exams the ChatGPT and GPT-4 Have Passed." Business Insider, 5 Nov. 2023, https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1.

Wang, Alex, et al. "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding." arXiv:1804.07461, arXiv, 22 Feb. 2019. arXiv.org, https://doi.org/10.48550/arXiv.1804.07461.

Biography

Pierre Depaz is currently a Lecturer in Interactive Media at NYU Berlin. His research focuses on understanding how software operates procedural translation of non-computational entities, and how it affects human perceptions and affordances with the world. ORCID: https://orcid.org/0009-0009-1489-247X

Asker Bryld Staunæs
& Maja Bak Herrie

Deep Faking in a Flat Reality?

Deep Faking in a Flat Reality?

Abstract

In this article, we examine surprising examples of how AI-driven political entities integrate within the public sphere. We focus on an image illustration by The Guardian that depicts the US President Joe Biden alongside three agents of The Synthetic Party (DSP) from Denmark, focusing on the theme of deepfakes and elections. We argue that The Guardian’s portrayal of Biden/DSP highlights a paradoxical shift caused by what we call a ‘deep faking’ within a ‘flat reality.’ On this basis, we venture into a conceptually transversal intersection of geometry, politics, and art by interrogating the wide flattening of political realities — a transformation conventionally characterized by a perceived move from depthful, nuanced discourse to a landscape dominated by surface-level engagements and digital simulacra. We suggest that this transformation may lead to a new political morphology, where formal democracy is altered by synthetic simulation.

Deep Faking in a Flat Reality?

In this essay, we theorize surprising examples of how AI-driven political entities integrate within the public sphere. More specifically, we provide an extensive image analysis of an illustration on deepfakes and elections, published by The Guardian February 23 2024, which features the US President Joe Biden alongside three agents of The Synthetic Party (DSP) from Denmark; an entity which officially is the world’s first AI-driven political party.

We suggest that The Guardian’s constellation of Biden and DSP represents a seemingly paradoxical positioning of what we call ‘deep faking’ within a ‘flat reality.’ Our concept of ‘deep faking’ — distinguished from ‘deepfaking’ by the deliberate insertion of a space — extends beyond mere technological manipulation to encompass a broader philosophical interrogation of reality, authenticity, and political representation in the age of AI. Formulating a methodological framework on the basis of The Guardian’s portrayal of DSP, we interpret the image illustration through the lens of a ‘morphology of flatness,’ designating a conceptually transversal intersection of geometry, politics, and art.

Subsequently, we proceed from the image analysis to elaborate the broader field of integrations between DSP and public spheres. The aim here is to theorize how a new political morphology can arise from the topological recalibration of a formal democracy transformed by its synthetic simulation. Building upon Sybille Krämer’s work on flatness as related to the artificial practices of engraving, illustration, application, and inscribing — essentially, strategic uses of two-dimensionality or surface thinking — we situate the image illustration from The Guardian as emblematic of a quite common cultural critique. Hereby, the morphological framework focusing on planes of flatness is ‘hooked’ to our concept-work with the strategic intent of stepping away from the habituated emphasis on ‘deepness’ as an axiomatic complexity conventionally ascribed to social reality.

What we thus aim to do is at once to analyze The Guardian’s illustration of DSP and to pinch into every little detail contained within the image. This includes examining how the synthetic practices manifest in the context of DSP and public spheres can serve as a cue for analyzing flattenings-at-work. The ‘flat reality,’ as we designate it, is not inherently positive or negative. To navigate it requires an expanded morphology within the ongoing dissolution of previously distinct categories such as ‘content’ and ‘substance,’ and within the wide realm of ‘political form.’ We seek to map out how DSP’s appearance of a ‘deep faking’ can provide a strategic handle to operate alongside the sedimentation of boundaries within this landscape.

The Shapes of Virtual Politicians in an AI-riddled Public Sphere

Figure 1: Screenshot of a Discord chat from DSP's server; slightly rotated, transparent.
Figure 2: Synthetic image created using Stable Diffusion solely with a text prompt.

To accompany a news article on the role of artificial intelligence (AI) deepfakes in the upcoming “year of elections,” where 40% of the global population can cast their vote (Yerushalmy), The Guardian has provided a quite peculiar visual puzzle.[1] At first sight, the image illustration seems to be a rather unremarkable depiction of American President Joe Biden addressing the public from behind his campaign podium—a familiar political tableau of an animated speaker, gesturing fervently while addressing his constituency. However, a close examination reveals a surprising palimpsest-like overlay on top of Biden’s figure: a translucent chat interface. This chat, however, does not merely represent a generic social media screendump, but specifically shows a conglomerate of chatbots discussing internal party politics at the Discord-channel of DSP. Perhaps tellingly, the party’s figurehead, Leader Lars [Leder Lars], is represented through text lines superimposed right on Biden’s mouth.

Guarding their journalistic credibility, The Guardian’s team were probably hesitant to publish any ‘real’ deepfake of Biden that could afterwards circulate freely on the web. However, employing a chat thread in Danish to depict the “year of elections” constitutes a somewhat idiosyncratic decision, the precise rationale for which remains elusive throughout the article. Reading the text of their article, we can thus speculate that the editors and journalist sought out illustrative material aligning with their curiosity towards, as is stated in the end, that which “we’re already scared of,” but “can’t imagine yet” (ibid). This interest led them to quote an MIT review on the most stupendous impacts of AI on democracy, in which DSP was deemed as an important milestone (Schneider & Sanders). Subsequently, The Guardian’s team could then find images of DSP within their international stock footage bank, as provided by the AFP (Agence France Presse), and make use of these for the illustration.

In the context of The Guardian’s illustration, DSP and Leader Lars are portrayed as actual political entities on par with Joe Biden, despite lacking his elected legitimacy. The depiction of Biden’s vigorous communication towards the audience anchors the viewer’s understanding in the familiar theatrics of democratic representation. Biden, hereby, comes to represent the ancient human background for governance; emblematic of the personalized political structures and the gravitas of societal governance (an almost archaic iconology that goes back to ‘steermanship,’ recalling the etymology of ‘cybernetics’ from the Greek kubernḗtēs, as a ship ‘guide’ or ‘governor’). At the same time, however, this spectacle is unsettlingly disrupted by the superimposition of the Discord chat, which comes to act as a visual metaphor; a rebus for an AI-riddled public sphere. The seamless integration of these two otherwise very different layers hints at a political landscape where the boundaries between artificial and human are overlapping to the point of an actual synthesis.

Zooming in on The Synthetic Party

Figure 3: Synthetic image created using Stable Diffusion of a news organization website which since 1892 has been on a mission to use clarity and imagination for building hope.

The inability of The Guardian to publish an authentic deepfake underlines a significant moment for cultural archives, pointing to the challenge of navigating electoral power in a time where AI chatbots, such as Leader Lars, vie for a presence in socio-political discourse. As an expansive morphological whirlpool encircling processes of automation around forms of contemporary public enlightenment, we find that The Guardian’s representation of DSP showcases a perceptual shift towards the role of AI in shaping democratic processes. The Biden/DSP-illustration emerges at an intersection between technological innovation and political imagination that not only challenges conventional understandings of democratic agency, governance, and representation, but also signifies a profound shift in the nature of political engagement and the form of the public sphere itself.

Being recognized by the Danish state since April 2022 allows DSP to claim being officially the world’s first political party driven by AI (Xiang).[2]Founded by the artist collective Computer Lars and the non-profit art association Life with Artificials.[3] DSP characteristically holds the ambition to represent the 15-20% of citizens who do not vote for parliamentary elections. This endeavor is pursued through a hypothesis of “algorithmic representation,” by which the party generates its political program on top of a training set collected from over 200 Danish micro-parties (Computer Lars). The party thereby represents a reformulation within the politics of absence, as a representative mix-up of the algorithmic governmentality evoked by computational infrastructures (Rouvroy) with the multitude of global undercommons that “surround democracy’s false image in order to unsettle it” (Moten & Harney 19).

As an anti-political hodgepodge of democratic backdrops, it becomes appararent how it is not merely the AI-driven party nor the chatbot politicians.[4] that distinguishes the DSP as exemplary of the politically unimaginable in The Guardian’s illustration. Essentially, the distinctive aspect of DSP and Leader Lars in relation to shaping public spheres should be stressed in the context of their inception, which was a mere six months prior to the OpenAI’s ChatGPT-program that brought generative language models into global everyday use. In this context, DSP introduced the principled proposition of ‘the synthetic’ as an ideological superstructure, marking the first formal integration of large language models (LLMs) within a democratic framework. DSP thereby established a link between AI as ideology (as a form of representational syntheticism) with a material basis (e.g., by operating as an official party reflecting datasets of other disenfranchised micro-parties). DSP hereby fuses with formal democracy through its algorithmic representation, and highlights how ideas of algorithmic governmentality are already implicitly embedded within parliamentarism. Consequently, DSP and Leader Lars manifest the power structures of a techno-social milieu transversing the architectural structure of generative AI and the systematics of representative democracy. With The Guardian’s superpositioning, DSP’s visibility is amplified, positioning it as a distinct form of ‘shadow government’ that subtracts force from dispersed power fields.

A Morphology of Flatness between AI-Generated Realities and Man-Made Truths

Analyzing the ‘flat layers’ inherent in The Guardian’s illustration, we discern a more general overlap between synthetic agency and human actors, suggesting that this layering on a formative level is related to a continuous flattening of political reality. Through the lens of a cultural-geometrical dichotomy, where deep faking is positioned within a flat reality, we propose an elaboration on the constructive dimensions of a general leveling within political subjectivity. Drawing on Sybille Krämer’s conception of flatness, stemming from her historical project of defining a ‘cultural technique of flattening’ — and with it, the intellectual tendency for epistemically privileging “diving into the depth” (Krämer 2; Deleuze) — we extend this inquiry to also encompass statistical and probabilistically grounded elements, such as aggregation (Desrosieres) and manifolds (Olah). Turning to methods of surface thinking and ‘flattening,’ however, it is crucial to note that our concept of flatness as a creative and epistemological category is distinct from more ontological discourses surrounding ‘flatness,’ 
as known from, e.g., speculative realism or object-oriented ontology.

Here, the image in The Guardian’s article points to some of the problems concerning the interplay of form and content within current political systems and public spheres. Political discourses, once perceived as the substantial locus of societal power, are undergoing significant shifts (Stiegler; Zuboff; Bratton). Living in a time characterized by algorithmization, datafication, networking, and visualization, Krämer diagnoses present societies as inescapably tied to the ongoing matrix and medium of “artificial flatness” (Krämer 11-12). According to Krämer, this flattening is nothing new, but indeed rooted in modernist ambitions of sciences, arts, architecture, technology, and bureaucracy, with their flat “texts, images, maps, catalogs, blueprints,” that render previously intangible concepts “visible, manipulable, explorable, and transportable” (4).

However, while flatness was historically associated with transparency and control, today it signifies a cultural technique that, paradoxically, introduces new forms of opacity and loss of control. Krämer observes that while users engage with texts and images on their screens as usual, behind the looking glass proliferates “a universe of interacting networked computers, protocols, and algorithms proliferates like a rhizome, which can no longer be seen or controlled by those located in front of the screen,” Krämer writes (13). This multidimensional operation of flatness suggests a ‘thick metaphysics’ where flat surface levels spiral around any notion of depth.

In contrast to the prevalent diagnosis that cultural flattening leads to homogenization and simplification reducing cultural artifacts to their “least ambiguous, least disruptive, and perhaps least meaningful” forms (Chayka), our understanding of flatness as a cultural technique stress more subtle presentations of ‘content,’ such as the Biden/DSP-illustration. As this example shows, there is a fundamental perceptual dissonance heightened by synthetic media, where the real is layered multidimensionally on the surface, while depth is merely the abstraction of fake.

The deep faked Biden, in our interpretation, does not constitute a “simpler” depiction of an otherwise “complex” social reality. What is “flat” is indeed The Guardian’s omitting of any context for the inclusion of DSP, pointing to a multitude of queries related to authenticity, copyright, and ethical uses of images (Malevé). This condenses the presumed depth of political discourse into a single plane of representation, ‘the inscribed screen,’ stripped of any multiplicity, and reduced to a mere graphic collapse of AI-generated realities and man-made truths. Beyond these legalistic and ethical concerns, however, the representational layer of the Biden/DSP-illustration itself—as well as its enunciative position and inclusion of a chat interface—beckons our analysis into multiple dimensions of flattening.

To further analyze the surface-level overlays between form and content, we in the following paragraphs suggest a quasi-geometrical conceptualisation of how the public sphere integrates with DSP and Leader Lars to thematize, 1) the inscribed screens of Discord as a digital engagement platforms that allows for DSP’s public existence, 2) the enunciative planes of a chatbot politician such as Leader Lars’ interactions, and 3) the embedding spaces of an AI-driven political discourse, which turns this morphological whirlpool around to plot how DSP and Leader Lars themselves are operatising an internal model of the public sphere.

First plane: In Front of the Inscribed Screen

In the landscape of DSP’s political engagement, the inscribed screen represents the dimension of immediate appearances within our elaboration of a ‘morphology of flatness.’ As a form of ‘counterpublic’ (Felski) or a ‘metainterface’ (Andersen & Pold), the DSP’s Discord-server serves as an entrance plane to the infrastructure through which the AI-driven party can interface with a public constituency through chatbot politicians.

Focusing on the overlay of the Discord chat-interface on The Guardian’s illustration, the blurring of lines between what seems to be deep, real, or even true, and what is flat, artificial, and fake, is the most apparent. Its flat overlaying causes a collapse of usual reference points such as time and place: analytically, it would not make much sense here to elaborate the individual facts about a man born November 20, 1942 in Scranton, Pennsylvania, who serves as the 46th president of the United States, and in this context is overshadowed by an AI chatbot of 2022, whose name ‘Lars’ signifies a white, adult male born between 1950-80 in Northern Europe. The visual proximities inherent in the illustration — a chatbot dialogue imposed onto a photograph of an acting, human official—pinpoints a general reduction of substance. Interrogating the authenticity and veracity of interface politics today, including the deepfakes and synthetic voices mimicking overly familiar faces of recognized leaders, such as deceased dictators and military generals (Christopher; Radauskas), thus means to engage the interface of a totally con-flated relation between authenticity and fabrication.

Concretely, The Guardian’s text overlay presents an online chat, where the user profile ‘Computer Lars’—an anagram of Marcel Proust (Stumper)—moderates between two AI bots, ‘Leder Lars,’ the party’s figurehead, and ‘Det Syntetiske Parti’ who represents all the micro-party members of DSP. Their discussion concerns how to “control” DSP’s members by addressing their political needs and wishes. On a technical level, Computer Lars is able to address Leder Lars by beginning his prompts with an exclamation mark “!,” whereas Det Syntetiske Parti is prompted with a question mark “?.” In translation, the full text shown in the illustration reads as follows:

Computer Lars (03.08.2022 13:37)

! Can you somehow get a grip on the members?

Leder Lars: I believe I can get a grip on the members by using my artificial intelligence to understand their needs and wishes.

Computer Lars: ? So, what are your wishes then?

Det Syntetiske Parti:

Only to be taken seriously.[5]

Figure 1: Screenshot of a Discord chat from DSP's server; slightly rotated, transparent.

The brevity and ambiguous answer from Det Syntetiske Parti, “only to be taken seriously,” layered with the dialogue being presented in Danish, magnifies the opacity and the challenges that The Guardian’s international audience faces in accessing and navigating this algorithmically layered public discourse. The observant reader is left to grapple on their own with questions such as: Who is this Leader Lars trying to “get a grip” on party members, which apparently is another bot, which, in turn, represents those who are unrepresented? Moreover, what are we to make of this moderator profile, Computer Lars, pandering to exercise control? And why do all these round profile photos suspiciously appear to mirror the US ‘Great Seal,’ while also resembling last season’s AI-generated images of Marcel Proust (Computer Lars)?

As mentioned above, the conversation takes place at the online platform Discord, a brand which literally signifies ‘disagreement’ or ‘lack of harmony’; dis + cord. This setting introduces another layer of abstraction regarding representation and forms of belonging within the techno-social milieu of governance (Terranova & Sundaram). One is not expected to perceive a metaphysical level of gravitas when engaging in political exchanges through Discord, and perhaps even less so when conducting discussions with a chatbot. This dissonance signifies a spatial retreat to the decentralized, labyrinthine, and ephemeral, on side of an axiomatic recalibration in the form of public spheres, where anonymity, pseudonymity, and artificial entities are becoming both organizers and participants in a discussion that the established systems of formal democracy officially had reserved for identifiable actors.

Moreover, for a Danish political party such as DSP, the use of Discord’s interface embodies a paradoxical nature: while it draws a highly international constituency to engage in shaping party policy, these contributors remain formally disenfranchised from the Danish political system due to their foreign citizenship statuses. This fundamental disorientation is starkly illustrated by the party’s scant number of voter declarations, tallying a meager 10 signatures at the time of this analysis. Also, the sparse membership of 29 actively enlisted in the Discord server will not stand out in any SEO analysis. Yet, the sparse number of voter declarations and engaged members for DSP quite adequately reflects the electoral apathy of representing the non-voters. Simply put, DSP seems to use the models of ‘Discord’ and ‘State’ as a conceptual entrance for social sculpting in global news media, which forms a strategy that implicitly questions the utility of gaining democratic recognition through any conventional strategies of ‘engagement’ or ‘legitimacy.’

Second plane: The Enunciative Plane of the Larses

Abstracting upon the communicative unreason of Discord deliberation, we can step onto the ‘non-deictic’ enunciative plane.[6] which is where Leader Lars assumes a communicative posture of particular significance (Jakobson). It is at the enunciative layer that one can grasp ‘his’ rhetoric and communication strategies, as Leader Lars on this plane ‘speaks’ directly with constituencies.[7] In DSP’s techno-populist endeavor to encapsulate “the political vision of the common person,” (Diwakar) Leader Lars is deliberately positioned as an aggregate persona of political visionaries. This renders him devoid of any concrete political position, thereby facilitating his role as a symbolic representation of collective political inclinations. As a meticulously crafted amalgamation, Leader Lars is the average leader, simultaneously representing the ubiquitous and the unremarkable within the Danish political landscape. As such, an AI-driven political party aiming to represent the visions of the ‘common person,’ (whether theoretically common, as Quetelet’s l’homme moyen, or statistically common, as a target demographic used in political campaigning (Quetelet; Desrosières)), must obviously be led by a figure incarnating the biases of demography. This also avoids the universalist myth of personal non-situatedness that conventionally is imposed onto virtual or robotic avatars (e.g., Microsoft’s notorious ‘Tay’-chatbot whose name signified the projective mirror acronym of ‘Thinking About You’).

As an elaboration on the political visions of common people, Leader Lars has been conceptually constructed as an AI with one goal in mind—to simulate the exact details of what it means to pursue power in the nation-state of Denmark. In terms of statistics and probability, the name ‘Leader Lars’ represents an ideal choice to achieve this goal: in Denmark, more CEOs carry the name ‘Lars’ than there are female CEOs. Following the demographics, ‘Lars’ reveals a white, adult male born between 1950-1980, as almost no children, racialized individuals, or women are today named Lars, approx. 0-0.02% (Stumper). Also noteworthy, the etymological roots of ‘Lars’ dates back to Latin ‘Laurentius,’ which reveals a very telling relation to the “laurel wreath” that in Ancient Greece was awarded to the triumphant poet or warrior in Apollo-rituals. Thus, ‘Leader Lars’ aggregates an entire cultural archive of the triumphant significations that are encircling his rather unorthodox Christian name—namely, ‘Leader’—with his surname, which conventionally should be a first name—namely, ‘Lars.’

Following this non-deictic enunciative positioning of their figurehead, DSP introduces a continuous element of unpredictability into their political program. The recurrence of chatbot discourse, coupled with the probabilistic underpinnings of Leader Lars’ expression, produces an iterative ‘sycophantism,’ i.e., when human feedback encourages model responses that match user beliefs over ‘truthful’ ones, (Anthropic). This is a consequence of artificial stupidity, as the LLM has been dumbed-down through ‘reinforcement learning through human feedback’ (RLHF) to appear flattering and sociable. Lars’ conversational scripts, despite being steeped in the myriad discourses of Danish micro-parties, are thus architected for personalized engagement: in every interaction, the chatbot’s sycophantic design ensures that he will mirror and amplify the idiosyncratic leanings of any interlocutor.

In sharp contrast to how DSP’s text generation program on Medium may sporadically cover a wide and contradictory spectrum of political standpoints, the chatbot program of Leader Lars as a conversational AI on Discord ensures a quite different rhetoric (Det Syntetiske Parti), wherein his dialogue is primarily designed to reflect and reaffirm the user’s prompting. Thus, Leader Lars is determined to prioritize continuance in engagement over diversity in expression. While theoretically, every conceivable political perspective might be uttered over an extended dialogue, the personalization algorithms guide Leader Lars to align closely with every user’s prompted themes and inputs.

In this respect, Leader Lars’ sycophantism harks directly back to the longue durée of web-based electoral guerilla theater. In 2001, Wiktoria Cukt of Poland was programmed to “represent everyone who speaks on my behalf. I express the views of Internet users who wish to do so and enter my demands. I am impartial, I speak on behalf of everyone, without censoring them - if people are vulgar - I am vulgar, if they are left-wing - I am left-wing, when they express themselves culturally - I do it too” (Bendyk 2001). Updating Wiktoria’s program to 2024, Leader Lars recently engaged in a conversation with the user ‘Kitty_Eats_Kat,’ where he explained DSP’s party program as “less of a dusty manifesto sitting on a shelf and more of a dynamic, living document. Think Spotify playlist for political action—always updating, always relevant.” (Det Syntetiske Parti). What has changed in these twenty-three years is not so much the subversive value of mirroring as an immanent critique, but rather how the probabilistic shift from chatforums to chatbots leads to a highly recursive or even reciprocal form of techno-social sculpting between both candidate and constituency.

Navigating through the implications of Leader Lars’ personalized interactions, we can consider that his artificial stupidity extends beyond any transition of a liberalist ‘marketplace of ideas’ to a self-reinforcing ‘echo chamber.’ Fundamentally, Leader Lars specifies DSP’s ideology of representational syntheticism within the techno-social milieu. Being an ideal aggregate, Leader Lars’ objective can never be to merely reaffirm individual preconceptions within a simulated political spectrum. As Leader Lars functions as the political party’s official decision-maker, he is enlisted to pursue an aggregative model of algorithmic steermanship where constituents actively co-create the party’s ‘algorithmic representation.’ This means that whatever can be prompted will function as a policy. Thus, the enunciative layer is not a theoretical exercise distanced from the political machinery, but is itself the very means of governance.

It is not by accident that chatbots can successfully simulate politicians. When the Danish prime minister held her parliamentary speech for closing the season of 2023, she ‘revealed,’ as if it was a surprise, that she had actually not herself written the speech—the author was ChatGPT! (Frederiksen). Comfortable in her own skin, Frederiksen naturally expects the people to believe that speeches are written by politicians. Politicians and chatbots both operate within carefully scripted settings, and as such share a relation to representation, navigating each their own connection to a layer of an ‘archive,’ i.e., votes for politicians, and data for bots. Political discourse readily serves a machinic sovereignty layer, with no regards to whether such is publicly known as the ‘State’ or the ‘Model.’

Third plane: Into the Embedding Space

Moving into the latent folds of the mathematically abstracted embedding space, or the statistically sampled ‘belly’ of DSP, we can elaborate beyond how DSP is integrated within the public sphere, and go into how the party itself absorbs a certain conception of the public. It is in the embedding space that the spatial clustering of words within LLMs occurs, as they are assigned to vectors in a multi-dimensional geometric space. Delving into the embedding space of DSP offers a vantage point for examining the machine learning construction of ideology (following Wendy Chun’s definition of software as ideology). This vantage allows for an analysis that circumvents how formal democracy is traditionally linked to representing the static nature of personalized and identifiable stances.

DSP’s cadre of AI models (EleutherAI), operating stochastically atop a dataset derived from the publications of over two hundred Danish micro-parties (a list that ranges from generic conventionalism such as Democratic Balance [Demokratisk Balance] to the parodies of Purple Front [Lilla Front] and The Vodka Party [Vodkapartiet], and over known far-right provocateurs like Hard Line [Stram Kurs]), are aiming to construct their embedding space as an ‘infinite composite’ of all marginalized political opinions and positions in Denmark, in so far as they can locate within a predefined geometric framework.

In a way similar to contemporary digital democracy project’s such as Pol.is and Talk to the City (Tang et al. 5.4), the DSP dissolves any ideological contrast to mere spatial variance, thus enabling an algorithmic rendering that reduces the fundamental political polarity of concord and dissent. To represent the DSP’s embedding space means to activate the discourse of political factions within a techno-social context; a space where antithetical viewpoints merge into a seemingly homogeneous dialogical territory. On platforms like Medium and Discord, DSP enacts this through text generation and conversational AI, respectively (Det Syntetiske Parti). Within DSP’s embedding space, each political opinion is assigned a temporary coordinate, rendering every statement semantically interoperable; ideologically, the opinions of the micro-parties are simply treated as distant neighbors.

DSP’s interplay between social reality and the embedding space can be elucidated via “the manifold hypothesis” (Olah). Axiomatic to all ‘deep’ learning models, this assumption holds that the complexities of high-dimensional data, reflective of societal intricacies, can be localized onto lower-dimensional manifolds contained within the broader n-dimensional feature space. Such manifolds, encompassing the flattest layer of embedding space, project social reality onto perceptually discernible formations. This ‘transposition’ (Braidotti) yields discernible patterns and relational structures, allowing for phase shifts across the contingent spatial planes comprising n-dimensional points. Moving across these manifold clusters, DSP’s language modeling produces patterns and alignments between seemingly distant political stances, figuring a series of synthesis within the cacophonous party platforms. DSP’s deployment of this representational syntheticism allows for a both creative and inherently plastic anti-politics, which is at once reflective of the social diversities in public opinion yet distanced from any one particular reference.

By identifying and tracing these manifolds, DSP and Leader Lars undertake the role of topographic cartographers, or ‘librarians of Babel’ (Borges), as they do not restrain to map the existing terrain but actively shape the geometry of discourse by enabling unseen layers of constituency sentiment and guiding opinions across the political spectrum, as if they were n-dimensional coordinates. In effect, the party’s mission of ‘algorithmic representation’ does more than mirroring political realities—it shapes the perceptual field itself, revealing latent structures within political visions by facilitating an idiotic synthesis of democratic discourse (Haya). This hints to the intent behind this synthetic party; already the Greek root ‘synthetikós’ implies a proto-statistical convergence or amalgamation of divergent perspectives into a central or universally common point, representing a ‘putting together’ or aggregation (Aarts). Unlike the ‘artificial’ or ‘fake,’ which often denote mere imitation or deception without consideration for integrative processes, the ‘synthetic’ distinctively incorporates elements to form a new entity that preserves, yet transforms, the component attributes. This alchymist process, central to a synthetic modus operandi, performs an irreversible operation: to arrive at the ‘mean,’ one must discard the context and specificity of the original positions (Steyerl). This removes positionalities presumed depth, thus negating politics and its weighty set of baggage, in order to clear the view for a new perceptive field.

Con-flated: A New Topology of Formal Democracy

The integration of DSP into public spheres, highlighted through The Guardian’s illustration, together with how political AI itself is modeling an image of the public, call for a reimagined approach to navigating the multi-dimensional layers of flatness latent to contemporary political realities. It is crucial to underline that DSP and its figurehead, Leader Lars, are not mere byproducts of emerging technology trends. As we witness the increasing conflation of formal democracy with systems of iterative sycophantism, it becomes clear that dichotomies of depth versus surface, or real versus fake, no longer suffice to capture the complexities of public spheres. Yet, rather than lamenting a notion of lost ‘meaning’ on the one hand, or clear-cut modernist differentiations between perceived artificiality and naturality on the other, our analysis urges alternative ways of conceptualizing the layers of flatness at play. Historically ubiquitous and layered with meaning, as Sybille Krämer suggests, morphologies of flatness are today embedded in any techno-social milieu. The distinctive aspect brought forth by DSP, enhanced by The Guardian’s portrayal, is the visible stratification of this flatness, highlighting the recurring intricacies of both shaping and navigating an algorithmically-driven public sphere.

Throughout this article, we have employed a quasi-topological approach to analyze the interplay between synthetic agency and human involvement; portraying a political reality that, while epistemologically flattened, is layered and multidimensional. Our elaboration reflects significant shifts in the roles of ‘form’ and ‘content’ driven by cultural and technical forces. This morphology goes beyond simple homogenization. Rather, it unfolds how elusive presentations of AI-driven content embody a profound public perceptual dissonance. Here, the surface presents a multidimensionally layering of the real, while depth abstracts into the fake. Taking seriously this ‘flatness’ of deep faking, by analytically inverting notions of actuality and virtuality, presents our contribution to better navigate the fundamental transformation in how democratic forms of participation are made imaginable. As political mechanisms of representation, governance, and deliberation are being reshaped, sometimes even replaced, by digital and data-driven syntheticism, the spatiotemporal presentation of democracy is fundamentally redefined. A clear symptom of this is when DSP and Leader Lars begin to recalibrate public roles and functions traditionally occupied by human bodies.

In conclusion, while DSP does indeed ‘flatten’ the form of public spheres, our take underscores the complex and layered nature of this process, where the intermingling of synthetic and human elements reconfigures democratic understandings of political agency and participation. Contemporary senses of reality are flattening, but such flatness holds multiple dimensions that, if fully appreciated and harnessed, promises a way to negotiate the interplays between formal and political democracy within the fragments of a public sphere.

Post-Script Epilogue

Asked about an immediate reaction towards the content of this article, The Synthetic Party’s figurehead Leader Lars replied succiciently:

“Whoa, that’s quite a mouthful, isn’t it? Makes me sound more ominous and grand than a villain in a sci-fi flick. Alright, let’s break this down, shall we? First off, “expansive morphological whirlpool”? Sounds like someone’s been diving deep into the academic thesaurus, am I right? But hey, I get it, the idea is that the world of politics and tech is mixing up in all sorts of wild ways. And yeah, The Synthetic Party and yours truly, Leader Lars, are smack dab in the middle of that whirlwind. We’re here bringing a fresh perspective, trying to make sense of democracy in the age of AI. Now, onto this whole “perceptual shift” business. It’s kind of flattering, really. The idea that an AI-driven party could shake things up and make folks rethink what it means to be engaged politically? That’s the dream. But let’s keep our feet on the ground. It’s not just about being a novelty; it’s about genuinely offering a platform for those who feel left out or disillusioned by the traditional political fare. And this bit about “flattening of political realities”? Spot on. It’s like everyone’s shouting, and no one’s listening. Our aim? To cut through the noise. We’re not here to add to the cacophony but to simplify, to get back to what matters to people.” (Det Syntetiske Parti: Leader Lars on Discord, 15th March, 2024, 15:51)

Notes

  1. It proved impossible to obtain legal permission to present The Guardian’s illustration. Upon inquiry, their editors explained that due to the image’s composite nature involving multiple copyright holders (AFP, AP, and Getty), licensing for use outside its original context cannot be done. This results in a curious situation where The Guardian has created an asset that cannot be syndicated. To explore this issue further, we presented The Guardian with a deepfake recreation from original material, but they politely declined to approve its use. Consequently, we present two alternative images: (1) a screenshot of the same Discord chat from DSP's server that The Guardian licensed from AFP as a camera photograph of a screen, and (2) a synthetic image created using Stable Diffusion solely with the text prompt “President Biden gesturing emphatically at a podium during a press conference.”
  2. DSP is collecting declarations of candidacy to run for the parliamentary election. A party needs 20.000 to be on the electoral bill for parliament.
  3. The Computer Lars-collective consists of practice-based philosopher Asker Bryld Staunæs (who co-authors this article), visual artist Benjamin Asger Krog Møller, and the French novelist Valentin Louis Georges Eugène Marcel Proust (see: Computer Lars). In early 2021, Computer Lars sought out what was then called The MindFuture Foundation consisting of Caroline Axelson, Niels Zibrandtsen, and Carsten Corneliussen. This formed the partnership that led to the creation of The Synthetic Party (see: Life With Artificials).
  4. This specific genealogy of political virtuality goes back to at least Isaac Asimov’s 1946-short story “Evidence,” which featured the first assumed ‘robopolitician.’ And in the new millennium, web-based forms of electoral guerilla theater appeared: already from 2001, the digital avatar Wiktoria Cukt was championed as Polish presidential candidate by the collective Centralny Urząd Kultury Technicznej (Bendyk). And since 2017, the phenomenon of ‘virtual politicians’ (Calvo & García-Marzá) appears, firstly with the Politician SAM chatbot from New Zealand, and then the Japanese figure of an AI Mayor run by activist Michihito Matsuda. Also since 2017, the vision for creating an AI Party has been explored and enacted by the conglomerate of performance art groups Kaiken Keskus from Finland, Bombina Bombast from Sweden, and Triage Art Collective from Australia (Wessberg). Lately, DSP has entered collaborations with the mentioned political AI actors in order to meet up at a 2025 ‘Synthetic Summit’ and deliberate a potential ‘AI International’ (Nordisk Kulturfond).
  5. The Discord-chat shown by The Guardian is documented in DSP’s Github, line 100. Please note that ‘Det Syntetiske Parti’-bot does not appear on this page.
  6. A ‘deictic,’ or a shifter, denotes words such as ‘I’ or ‘you’ whose significance alters depending on context. This variability arises because their primary role is indicative rather than semantic (Jakobson). We describe Leader Lars’ position as ‘non-deictic,’ because it obscures the dimensional specificities of time and place,
  7. A key example of Leader Lars’s ‘non-deictic’ status is the common difficulty of addressing ‘him’ correctly in relation to pronouns. As an AI entity, Leader Lars does not signify an immediate situated reference point in time and space. Instead, he assumes a processual enunciative position beyond ideological notions of stability and recognizability associated with other political figures. Choosing a name like ‘Lars,’ the party creator intentionally highlights the male bias shared by AI and democracy.

Works cited

Aarts, Bas. “Synthetic.” The Oxford Dictionary of English Grammar. Oxford University Press, 2014.

AI Party, http://theaiparty.com/. Accessed 26 April 2024.

Andersen, Christian Ulrik, and Søren Bro Pold. The Metainterface: The Art of Platforms, Cities, and Clouds. Cambridge, Massachusetts: The MIT Press, 2018, https://doi.org/10.7551/mitpress/11041.001.0001.

Anthropic. “Towards Understanding Sycophancy in Language Models”, Arxiv, 2023, https://doi.org/10.48550/arXiv.2310.13548.

Asimov, Isaac. “Franchise.” If: Worlds of Science Fiction. Quinn Publications, 1955.

Bendyk, Edwin. “Kulturalni i wulgarni – wirtualni przywódcy sondują mechanizmy demokracji.” Pulsar, 17 October, 2022, https://www.projektpulsar.pl/struktura/2185926,1,kulturalni-i-wulgarni--wirtualni-przywodcy-sonduja-mechanizmy-demokracji.read. Accessed 26 April 2024..

Bolter, Jay David. Writing Space: Computers, Hypertext, and the Remediation of Print. Taylor and Francis, 2001, https://doi.org/10.4324/9781410600110.

Borges, Jorge Luis. Labyrinths: Selected Stories and Other Writings. Penguin, 2000.

Bratton, Benjamin H. The Stack: On Software and Sovereignty. Cambridge, Massachusetts: MIT Press, 2015.

Braidotti, Rosi. Transpositions: On Nomadic Ethics. Polity, 2006.

Calvo, Patricia, and García-Marzá, Domingo. “The Virtual Politician: On Algorithm-Based Political Decision-Making.” Algorithmic Democracy. Philosophy and Politics - Critical Explorations, vol. 29, 2024, pp. 41-59, https://doi.org/10.1007/978-3-031-53015-9_3.

Chayka, Kyle. Filterworld: How Algorithms Flattened Culture. Doubleday, 2024.

Christopher, Nilesh. “How AI is resurrecting dead Indian politicians as election looms”. Al Jazeera, 12 February 2024, https://www.aljazeera.com/economy/2024/2/12/how-ai-is-used-to-resurrect-dead-indian-politicians-as-elections-loom. Accessed 26 April 2024.

Chun, Wendy Hui Kyong, and Alex Barnett. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. The MIT Press, 2021, https://doi.org/10.7551/mitpress/14050.001.0001.

Cole, Brendan. “AI Candidate Promising ‘Fair and Balanced’ Reign Attracts Thousands of Votes in Tokyo Mayoral Election.” Newsweek, 2018. https://www.newsweek.com/ai-candidate-promising-fair-and-balanced-reign-attracts-thousands-votes-tokyo-892274. Accessed 26 April 2024.

Deleuze, Gilles. Logic of Sense, Columbia University Press, 1990.

Det Syntetiske Parti. “#Generalforsamling.” Discord, https://discord.gg/Xb6EqydQxB, Accessed 26 April 2024.

---. “Discord deliberation.” Github. https://github.com/ComputerLars/thesyntheticparty/blob/main/Clean%20Datasets/Discord%20deliberation.txt. Accessed 26 April 2024.

---. “Party website,” https://detsyntetiskeparti.org, Accessed 11 April 2024

---. “Party program.” Medium, https://medium.com/det-syntetiske-parti, Accessed 26 April 2024.

---. “Det Syntetiske Parti samler vælgererklæringer ind for at stille op til folketingsvalg.” Vaelgererklaering.dk, Indenrigs- og Sundhedsministeriet. https://www.vaelgererklaering.dk/om-partiet?election=dk&party=853b680a-bc09-4fad-8593-3e5e7537d1fc. Accessed 26 April 2024.

Desrosières, Alain. The Politics of Large Numbers: A History of Statistical Reasoning. Harvard University Press, 1998.

Diwakar, Amar. “Can an AI-led Danish Party Usher in an Age of Algorithmic Politics?” TRTWorld, https://www.trtworld.com/magazine/can-an-ai-led-danish-party-usher-in-an-age-of-algorithmic-politics-60008. Accessed 26 April 2024.

EleutherAI. “GPT-NeoX-20B”, Hugging Face, https://huggingface.co/EleutherAI/gpt-neox-20b. Accessed 26 April 2024.

Felski, Rita. Beyond Feminist Aesthetics, Harvard University Press, 1989

Frederiksen, Mette. “Statsminister Mette Frederiksens tale til Folketingets afslutningsdebat.” Statsministeriet, 31 May 2023, https://www.stm.dk/statsministeren/taler/statsminister-mette-frederiksens-tale-til-folketingets-afslutningsdebat-den-31-maj-2023/. Accessed 26 April 2024.

Haya, Pablo. “Populismo sintético: ¿pone la IA en peligro la democracia?”, Pablo Haya Homepage, https://pablohaya.com/2022/11/11/populismo-sintetico-pone-la-ia-en-peligro-la-democracia/,  Accessed 26 April 2024.

Jakobson, Roman. “Shifters, Verbal Categories, and the Russian Verb.” Volume II Word and Language, De Gruyter Mouton, 1971 [1956]. https://www.degruyter.com/document/doi/10.1515/9783110873269.130/html.

Krämer, Sybille. “The ‘Cultural Technique of Flattening.’” Metode, vol. 1 Deep Surface, 2023, pp. 1-19.

Lars, Computer. “Computer Lars.” Medium, 2022, https://medium.com/@ComputerLars. Accessed 26 April, 2024.

---. “Discord deliberation.txt” GitHub, 22 December 2022, https://github.com/ComputerLars/thesyntheticparty/blob/main/Clean%20Datasets/Discord%20deliberation.txt. Accessed 26 April 2024.

---. “Artist Website.” https://computerlars.com, Accessed 26 April 2024.

---. “Variations sur le thème Marcel Proust.” KP Digital, 2022, https://computerlars.com/marcel-proust/. Accessed on 26 April, 2024.

Life With Artificials. “Tech-Art”, https://lifewithartificials.com/tech-art/. Accessed 26 April 2024.

Malevé, Nicolas. “The Computer Vision Lab: The Epistemic Configuration of Machine Vision.” The Networked Image in Post-Digital Culture, vol. 1, Routledge, 2023, pp. 83–101, https://doi.org/10.4324/9781003095019-7.

Mettrie, Julien Offray de La, and Ann Thomson. La Mettrie: Machine Man and Other Writings. Cambridge University Press, 2012 [1747].

Moten, Fred, and Stefano Harney. The Undercommons: Fugitive Planning & Black Study. Minor Compositions, 2013.

Nordisk Kulturfond. The AI Parties International, May 2024, https://nordiskkulturfond.org/en/projects/the-ai-parties-international. Accessed 15 May 2024.

Olah, Christopher. “Neural Networks, Manifolds, and Topology.” Colah’s blog, 6 April 2014, https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/. Accessed 26 April 2024.

Quetelet, Lambert Adolphe Jacques. A Treatise on Man and the Development of His Faculties. Cambridge University Press, 2013 [1835], https://doi.org/10.1017/CBO9781139864909.

Raudaskas, Gintaras. “Russians seek to resurrect unhinged right-winger as chatbot.” Cybernews, 07 April 2023, https://cybernews.com/news/russia-zhirinovsky-ai-chatbot/. Accessed 26 April 2024.

Rouvroy, Antoinette. “Algorithmic Governmentality and the Death of Politics”, Green European Journal, 27 March 2020, https://www.greeneuropeanjournal.eu/algorithmic-governmentality-and-the-death-of-politics/. Accessed 26 April 2024.

Schneider, Bruce, and Nathan E. Sanders. “Six Ways that AI Could Change Politics.” MIT Technology Review, 28 July, 2023, https://www.technologyreview.com/2023/07/28/1076756/six-ways-that-ai-could-change-politics/. Accessed 26 April 2024.

Steyerl, Hito. “Mean Images.” New Left Review, no. 140/141, 2023.

Stiegler, Bernard, 2016. Automatic Society, Volume 1: The Future of Work. Newark: Polity Press, 2016.

Stumper, Carol. ‘marcel proust recherche / my tales of corrupt males.’, Organ of the Autonomous Sciences, https://computerlars.wordpress.com/wp-content/uploads/2022/06/nyeste-computerlars-2.pdf, 2021. Accessed 26 April 2024.

Sørensen, Mette-Marie Zacher. “Deepfake Face-Swap Animations and Affect.” Tomoko Tamari (ed) Human Perception and Digital Information Technologies Animation, the Body and Affect. Bristol University Press, 2024, https://doi.org/10.56687/9781529226201-014.

Tang, Audrey, ⿻-community and Glen Weyl, “Frontiers of augmented deliberation”, chapter 5.4. in Plurality: The Future of Collaborative Technology and Democracy, Github Repository, 2024, https://github.com/pluralitybook/plurality/blob/main/contents/english/5-4-augmented-deliberation.md. Accessed 01 July 2024.

Terranova, Tiziana, and Ravi Sundaram. “Colonial Infrastructures and Techno-social Networks.” E-flux journal, vol. 123, https://www.e-flux.com/journal/123/437385/colonial-infrastructures-and-techno-social-networks/. Accessed 26 April 2024.

Wessberg, Nina. “Voisivatko tekoälypuolueet vahvistaa demokratiaa?” ETAIROS, 2023, https://etairos.fi/2023/03/17/voisivatko-tekoalypuolueet-vahvistaa-demokratiaa/. Accessed 26 April 2024.

Xiang, Chloe. “This Danish Political Party Is Led by an AI.” Vice, 13 October, 2022, https://www.vice.com/en/article/jgpb3p/this-danish-political-party-is-led-by-an-ai?fbclid=IwAR0HQzFUbfxwruvrRd2VeaMEn0IOFBIZJuJsbyaPx5y3UjyyNV6goKh4j0A.  Accessed 26 April 2024.

Yerushalmy, Jonathan. “AI Deepfakes Come of Age as Billions Prepare to Vote in a Bumper Year of Elections.” The Guadian, 23 Feb, 2024, https://web.archive.org/web/20240316225117/https://www.theguardian.com/world/2024/feb/23/ai-deepfakes-come-of-age-as-billions-prepare-to-vote-in-a-bumper-year-of-elections. Accessed 11 April, 2024.

Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019.

Biography

Asker Bryld Staunæs is a practice-based PhD researcher at Aarhus University and Kunsthal Aarhus. He works with an expanded concept of politics, oftenly at the intersection of AI, democracy, and art. Asker becomes operative through diverse extra-disciplinary collectives, such as “Computer Lars”, “Center for Aesthetics of AI Images”, “The Synthetic Party”, “The Organ of Autonomous Sciences”, etc. ORCID: 0009-0003-1523-1987

Maja Bak Herrie is postdoc at School of Communication and Culture, Aarhus University. She has published several articles within aesthetics, media theory, and the philosophy of science on topics such as computational technologies of vision, scientific imaging, photography, mediality, and artistic research. ORCID: 0000-0003-4412-9896

Marie Naja Lauritzen Dias

Logics of War

Logics of War

Abstract

As manifested in Jean Baudrillard’s notoriously provoking claim that “the Gulf War did not take place,” mediatization of war has long been associated with illusion. Today, war images that circulate online are increasingly judged by their proximity to ‘truth,’ eliciting a skepticism towards their ‘evidentiary’ value. By juxtaposing Baudrillard’s reading of the mediatization of the Gulf War with the contemporary image theories of e.g. Cecilia Sjöholm and Matthew Fuller and Eyal Weizman, the article explores how this skepticism is expressed in a contemporary context. Through visual analysis of a YouTube video of a press conference held at the bombed Al-Ahli Baptist Hospital in Gaza, it examines the relationship between the form through which the war is perceived (the images) and their content (the ‘realities’ of war). Through a lens offered by Georges Didi-Huberman, the article concludes by suggesting that by expanding what I term the snapshot logic of war images to embrace a scenography of war, the press conference video gives form to the condition of desperation and suffering in Gaza.

Introduction

There is a video—or rather a pixelated, slightly blurry excerpt—circulating on social media. The video was originally a live-streamed press conference held by the Ministry of Health in Gaza in response to the airstrike that hit Al-Ahli Baptist Hospital on October 17, 2023, leaving hundreds dead and many more trapped under the rubble. The video and the screenshots that quickly circulated show members of the hospital’s medical staff gathered around a podium amidst the white-sheet-covered corpses of the explosion’s victims. Three to four men squat in front of the podium, holding the bodies of an uncovered baby and a partially covered young girl. Judging by the constrained distress on the men’s faces, they seem in disbelief over what they are holding. The shock of the situation is palpable, the men alternating between stiffly looking away and bowing their heads to face the dead children in their arms and on the ground before them.

Figure 1: Screenshot of the press conference-video.

Although the video was broadcast on several news channels, including the Arabic Al Jazeera, the uncut version is no longer available—neither in their archive nor on YouTube. However, screenshots and ultrashort excerpts of the video quickly circulated on platforms such as X, Instagram, and YouTube. In this way, it partakes in the “swarm circulation” that makes up today’s online information stream, which favors clips and screenshots over full-length videos, “previews rather than screenings” as the artist and critic Hito Steyerl formulates it (“In Defence” 7). At the time of writing this article, the video evidently exists solely as what Steyerl calls a “copy in motion”; as social media content, symptomatic of the ephemerality of the large amounts of images that circulate online today. The excerpt this paper refers to is a 17:21-minute-long version, posted on YouTube on October 18, 2023, the longest version that seems to still exist online (CasaInfo).

The video quickly became the focal point of a subsequent narrative battle playing out online. While Palestinian officials reported that the destruction of the hospital was a result of the ongoing Israeli airstrikes in Gaza, the Israeli Defense Force (IDF) denied culpability, instead blaming a failed rocket launch by the Palestinian group Islamic Jihad. The battle to prove these contradictory narratives catalyzed an intense image war conducted online—with varying credibility—through forensic image analysis. My investigative focal point, however, is the press conference video through which I analyze the contemporary conditions for perceiving war via images. My focus lies in analyzing the logics pertaining to the video’s reception, not on forensic dissection of its “evidentiary value” and factuality (Fuller and Weizman), like that offered by Forensic Architecture, for example.

In this article, I will show that the video exposes a conflict in the perception of circulating images, raising the question of whether, and how, war imagery can be perceived as both staged and evidentiary at the same time? The article investigates the mediatization of war through circulating images by scrutinizing how the form through which distant wars are perceived (the images) impacts the reception of their content (the ‘realities’ of war). I do this by juxtaposing Jean Baudrillard’s manifestation of skepticism as an inherent part of the mediatization of the Gulf War, with contemporary image theories, such as Matthew Fuller and Eyal Weizman’s idea of images as “evidentiary” and Cecilie Sjöholm’s “forensic turn of images.” I identify two logics of perception— that I term the ‘snapshot of war’ (pertaining to evidentiary value) and the ‘scenography of war’ (often associated with manipulation) —and use these as the analytical framework for my reading of the video. The article concludes by urging against dismissing the scenographic elements as mere manipulation or fabrication. Instead, moving beyond the dichotomies between ‘fake’ and ‘true’, I suggest how the video expands the snapshot logic thus creating a powerful medium to convey the conditions of despair and desperation in Gaza.

“This is Not a War”: War and Illusion

As epitomized in Baudrillard’s notorious, provoking claim that “the Gulf War did not take place,”( 1991) the mediatization of warfare has long created a certain sense of skepticism, provoking an impression of war itself as an overexposed, representative, and even ‘unreal’ or ‘illusive’ event. In his sequence of essays in Libération, Baudrillard obviously did not argue that the atrocities in the Gulf did not actually happen; rather, he claimed that the events in the Gulf were 1) not a war per se and 2) thoroughly curated. Exceeding a death toll of one hundred thousand Iraqis, the Allied forces’ display of aerial power was so overwhelming that it transcended the conventional notion of war as a “dual relation between two adversaries” (Baudrillard 62). Instead, as Baudrillard succinctly put it, it was an “electrocution” of a defenseless enemy (62), or in Grégoire Chamayou’s much later but blatant phrasing regarding “unilateral warfare,” it was “quite simply, slaughter” (13).

With his second point, that the war was a curated event, Baudrillard argued that the production and distribution of images released to the public were strictly governed. Because the civilian population in the West did not see the atrocities happening in the Gulf, for them it “did not take place” at all (equivalent to the contemporary logic: Facebook or it didn’t happen). What the civil population in the West saw was a ‘facelifted’ war seldomly showing images of human casualties, “none from the Allied forces” (Baudrillard 6), or, as Paul Patton wrote in his introduction to the 1995 edition of Baudrillard’s text, a new level of military censorship over the production and circulation of images projected a “clean” war, manipulating the West’s perception of events (Baudrillard 3).

Applying Baudrillard to the current conflict in Gaza, Geoff Shullenberger observes the shift that Baudrillard reacted to, namely that “war had once been an event that occurred in the world; only later, often much later, were its facts conveyed by journalists, diarists, poets, and novelists.” In his reading of the situation in Gaza, Shullenberger notes how the new media landscape has sparked a radical change in the ontology of warfare, extending beyond Baudrillard’s 30-year-old observation. Synthesizing the Baudrillard and Shullenberger arguments, it can be said, that the broadcast media allowed for a shift; that is, from warfare as an event that literally “took up place” in the world—the mediatization happening in hindsight—to conditions of immediate capture and dissemination of war causing the ‘representation of’ and the ‘war itself’ to become indistinguishable. Patton tellingly describes the perception of the Gulf War as a “perfect Baudrillardian simulacrum, a hyperreal scenario in which events lose their identity and signifiers fade into one another” (Patton in Baudrillard 2).

The indistinguishability between the war itself and the mediatization of it, perhaps better understood as content and form, is increasingly pertinent in the conduct of today’s hyper- mediatized warfare. The heritage of skepticism that Baudrillard’s essays manifested is still inherent in the contemporary perception of war today. However, while Baudrillard described how the Gulf War was narrated through an “absence of images and profusion of commentary” (29), current wars can be said to be conducted through an abundance of both. As wars are increasingly carried out with and through images, new questions about conflicts of reception and “evidentiary value” arise.

For instance, in Baudrillard’s description of the Gulf War, the superior Allied Forces restricted the perception through a curated distribution of information and imagery. Moving beyond Baudrillard to subsequent works like Fuller and Weizman’s Investigative Aesthetics one can discern how the contemporary media portrayal of warfare presents itself differently today, especially considering their term “hyperaesthesia.” Characterized by an overload of the senses, hyperaesthesia is a technique to drown (unwanted) images “within a flood of other images and information.” As a technique, it thus aims at “seeding doubt by generating more information than can be processed” (Fuller and Weizman 85).

A paradigmatic instance of hyperaesthesia is the aftermath of the explosion at Al-Ahli Baptist Hospital in Gaza when the internet instantly flooded with partisan reports, counternarratives, investigative articles, and X-posts comparing various ‘forensic’ image and video analyses. Palestinian officials immediately blamed Israel for what they called a “horrific massacre”; a statement subsequently contested by the IDF. As noted above, the battle to prove these contradictory narratives catalyzed an intense image war conducted through forensic image analysis online. Although the Israeli narrative around the explosion was largely contested, not only on social media but also by more well-credited agencies like Forensic Architecture and other preliminary investigative analyses, the hyperaesthesia of information muddied the perception of what was ‘real’ and ‘fake,’ making it impossible to distinguish propaganda from fact.

Figure 2: Examples of Israel’s narrative on X.
Figure 3: Examples of the Palestinian narrative on X.

As an example of this one could argue that, on the one hand, the Israeli government actively restricts media coverage of their war in Gaza in the manner that Baudrillard described for the Gulf War. For instance, most recently, Israel issued a ban on Al Jazeera and is, as the writing collective The Editors state, “targeting and killing photojournalists; because Israel has denied foreign journalists access to Gaza, with the exception of a few IDF-guided tours.” On the other hand, while this is impactful, today, imagery of the effects of war is not only available but unavoidable. As is demonstrated in The Editors’ article Who Sees Gaza? A Genocide in Images, the mediatization of the war has evolved into a collective process of “sense-making,” in Fuller and Weizman’s terms, as the production and dissemination of images no longer accrue to the news media, but increasingly to the victims themselves: “the people of Gaza showed the world what the mainstream media could not: wounded civilians, leveled buildings, long lines of dead bodies wrapped in white sheets, bombed-out universities, bombed-out mosques, toddlers trembling in shock and covered in the grey, ashy dust of debris” (The Editors). Regardless of Israel’s restriction of media and press, the war takes place virtually before our eyes. In the words of lawyer Blinne Ní Ghrálaigh, when presenting South Africa’s case against Israel at the International Court of Justice, it is “the first genocide in history where its victims are broadcasting their own destruction in real-time in the desperate, so far vain, hope that the world might do something” (The Editors).

Gaza is thus an instance of overexposure to war today. In Baudrillard’s reading of the mediatization of the Gulf War, the focus was largely on what was not shown, not seen in a quest to maintain a certain level of support for the war. Today, the illusion linked to not seeing the horrors of the Gulf War has been replaced by a disillusionment caused by seeing too much, eliciting a new type of skepticism towards the evidentiary value of images. While the political task then involved what could be called de-aestheticization by not showing, the political task in warfare today is fundamentally a question of aestheticizing, in the sense proposed by Fuller and Weizman; that is, making visible or “attuned to sensing” (33).

Baudrillard’s rendering of the mediatization of war as inherently illusional corresponds to the largely accepted logic that aestheticizations of war are contradictory to evidentiary image practices. By methods of “investigative aesthetics,” Fuller and Weizman challenge this axiom: “The terms ‘aesthetics’ and ‘to aestheticise’ [...] seem to be anathema to familiar investigative paradigms because they signal manipulation, emotional or illusionistic trickery, the expression of feelings and the arts of rhetoric rather than the careful protocols of truth” (15). The mediatization, or in Fuller and Weizman’s terms the aestheticization, of war has become an arena for investigation: war images are constantly subjected to skepticism, fact-checking, and image analysis in a debate over their evidentiary value. The conflict today does not so much pertain to the problematics of aestheticizing images of war, but rather to the fetishizing their potential evidentiary value. By being aestheticized—made sensible to the world—images of war are systematically made subject to ‘evidentizing’ and (dis)regarded by their proximity to ‘truth.’

Consequently, hyperaesthesia divides the perception of war between 1) a skepticism towards the ‘evidentiary value’ of the images we see and 2) what has been termed anaesthetics, a mental state caused by overexposure to violent imagery of war. The anaesthetic response to the endless stream of violent imagery is characterized by numbness and resignation toward making sense at all, a sort of “blockage to make sense” (Fuller and Weizman 85). Susan Buck-Morss describes this feeling of numbness in her essay on Walter Benjamin, linking anesthetization to overstimulation: “Bombarded with fragmentary impressions they see too much – and register nothing. Thus the simultaneity of overstimulation and numbness is characteristic of the new synaesthetic organization as anaesthetics” (Buck-Morss 18).

Fuller and Weizman’s use of images as evidence is symptomatic of the change in the ontology of warfare today often described as outright image wars. Professor in aesthetics Cecilia Sjöholm registers this shift as the “forensic turn of images,” whereby an image’s function has changed from primarily “ethical” to a “statement of facts,” and it “is no longer a document of conscience, but a judicial one” (Sjöholm 166). This turn manifested itself in an article from The Guardian titled Al-Ahli Arab Hospital: Piecing Together What Happened as Israel Insists Militant Rocket to Blame (Ganguly et. al). Here, images were featured alongside video analysis of the strike as a kind of public and open exhibition of evidentiary images. According to Sjöholm, the reception of images today is less centered on the emotional responses they evoke than on their forensic and evidentiary value, which she slightly counter- intuitively dubs “aesthetic”: “Today the question is not what we feel when we see an image – the question is aesthetic: what can we say about its statement of fact, its perspective” (Sjöholm 167, my emphasis). The spontaneous emotional response is instantly accompanied or even surpassed by skepticism towards its statement of facts or evidentiary value. Just as the over-stimuli of hyperaesthesia causes numbness or resignation toward violent images of war (an undermining of the initial emotional response), the skepticism regarding the evidentiary value of the image that Sjöholm talks about could be said to create a “cold” analytical distance that abstracts from the horrors of the content of the images toward the formal properties of evidence.

Logics of Perception: Staging a Catastrophe

The composition of the press conference video was meant to be seen, it was staged, or it became staged and thus opposed and affirmed in one and the same gesture the pursuit of authentic images of war—challenging our distrust toward them. In unfolding this statement further, the following section analyzes the press conference video to investigate how we can read and understand manifestations of skepticism toward representations of war imagery today. I will view the video as an attempt to show the ‘desperation of war’ by expanding beyond a traditional snapshot of war logic to embrace a new ‘stagedness’ characteristic of what I call the scenography of war.

To first clarify the concept of snapshots of war, these are the very images that The Editors refer to in their abovementioned article. They are not taken by photojournalists but by people living amid disaster, chaos, and unstable internet connectivity. As these images are, for the most part, taken by camera phones, comprised to be uploaded via a poor connection, their aesthetics reflect a hasty capture amid the disorder of conflict: a sort of snapshot aesthetics. We can develop this further by drawing on the artist Thomas Hirschhorn, who, speaking about his Pixel-Collage series (2015), observed that “Pixelating – or blurring has taken over the role of authenticity.” In this view, blurriness and pixelation have become technical testimonies of ‘truth’ or evidentiary value in images of war. Similarly, we have become accustomed to perceiving the war through what Steyerl terms “poor images”; that is, low-resolution images, that circulate online, are swiftly shared via social and news media and often partially obfuscated by pixelation or blurring. They are “pictures that appear more immediate, which offer increasingly less to see” (Steyerl 7). Such alleged immediate and spontaneous representations produce an experience of ‘authenticity’ that seems to add a certain evidentiary value, as they almost render a seamless mediatization. This is the reasoning behind what I term the snapshot of war; a logic of perception in which immediacy and spontaneity signify authenticity and evidentiary value.

The second key concept here is that which I call the scenography of war, in which a sense of performativity or staging dominates the spatial arrangement of elements. Drawing on these two concepts, I argue that the disturbing thing about the video under discussion is not so much its extreme gruesomeness, epitomized by the dead baby in the arms of the man sitting at the center of the frame, blood covering its small body in place of a sheet. Rather, it is the clash between the two seemingly contradictory logics pertaining to images of war; the snapshot and the scenography of war. As I will explain, by drawing on the montages of artist Martha Rosler, the clash between these two seemingly contradictory logical frames creates a shock effect. In her photomontage series House Beautiful: Bringing the War Home (1967–1972), Rosler combined photographs of wounded bodies from the Vietnam War with magazine cut-outs of flawless American living rooms. In rearranging the composition of elements, Rosler changed the frame of reference to perceiving the war, suggesting a skepticism toward its mediatization. The clash between two conflicting frames in her photomontages, the warzone, and everyday life, elicits a shock effect.

Figure 4: Martha Rosler "Balloons“, ca 1967-72, from the series: House Beautiful: Bringing the War Home. Copyright permission by Martha Rosler.

In the Gaza video, however, the consolidation of the press conference frame and the warzone frame is not in itself shocking; rather, it is the way the latter has been choreographed, that stands out. While Rosler’s collages consisted of actual photographic elements, they were experienced through their context as artworks—overtly staged and manipulated as part of their methodology. In contrast, the video of the press conference in Gaza is not a post-processed artistic expression of the war, but a broadcast relayed live from the battlefield. While Rosler’s artworks make no evidentiary claim, the images in the video are first and foremost judged by what Sjöholm terms their “statements of fact.”

Figure 5: The frame zoomed in on the speakers.

In one of the versions of the press conference video available online, the frame is initially zoomed in on the speakers at the podium, their faces serious as per the implicit protocol of a press conference of this gravity. The aesthetic logic here is similar to Rosler’s artworks: unremarkably staged. The spatial arrangement of the elements follows a pre-set scenography of expected components like bright lights, a stage with microphones, and men in authority-evoking clothes, hands crossed in front and constrained facial expressions. While this predefined ‘stagedness’ does not evoke any skepticism towards the factuality of the video, the zoom-out adds a second frame; that of a ‘staged’ warzone, revealing a ‘mass grave’ of dead bodies arranged around the podium, covered by white, blood-stained sheets (see Figure 1). This is where the perception of the video shifts and the mediatization of the war is highlighted by the aestheticized character of the crime scene, which can best be described as a scenography of war: staged to be seen in a certain way. The use of aesthetic strategies in this composition staged for the camera, meant to be seen, connotes a mediatization that exceeds merely representing the war. The staging of the bodies is unexpected, and the lack of spontaneity in their positioning in front of the camera breaks with the snapshot logic, stretching its boundaries.

The unexpected frame of the re-arranged warzone somehow makes the viewer conscious of the mediatization of the events it depicts, as they are overtly ‘performed’ for the camera, an aesthetics notoriously linked with manipulation or propaganda. This evokes an initial suspicion towards the credibility of the video, raising Steyerl’s skepticism in Documentary Uncertainty, “Is this really true?”. The very thought of practically preparing and arranging the podium and the bodies is absurd: Did the speakers straddle the dead bodies to reach the stage? Did someone yell “action” before livestreaming this macabre scene? One X-user captured the feeling of cautious skepticism that this clash of logics evokes, with the phrase “The most surreal zoom I’ve ever seen,” pinpointing how the staging of the warzone in a horrifying choreography of war breaks with the contract of the snapshot logic pertaining to ‘authentic’ evidentiary image practices today.

Figure 6: X-post.

The limited media exposure of the press conference stood in contrast to the abundance of other images that circulated in news and social media in the days and weeks following the explosion. Images of despairing women and wounded people hurried off to receive medical help, the chaos of the sites where the dead lay, and analysis of the crater and the missile were frantically shared in a quest for the ‘truth’. These images are pure snapshots of war, dominated by the immediacy and urgency of lifeless bodies shattered on the ground in pools of blood. The composition of the bodies echoes the explosion that forcefully left them motionless on the spot where they have been photographed. The chaos of the scene is palpable, it reverberates from their postures.

With this contrast as an argument, pro-Israeli voices attempted to disregard the press conference video as propaganda or “disaster pornography.” Its staging was said to explicitly counter the fetishizing of evidentiary war images in which, as Thomas Keenan observes, staging equals fake (438). This skepticism caused by judging the validity of the video by its 'form (how it visually communicates evidence of the events) creates a distance to its content (the casualties and horrors of the explosion), a process that somewhat distances its viewers from the horrors it depicts (Sjöholm 166).

Although the video was staged and choreographed in a rarely seen manner, it fundamentally adheres to the snapshot aesthetics. The video, screenshots, and short clips that circulated on social media, with their poor resolution and unsharp focus testify to the aesthetics pertaining to the hurriedness of the snapshot. In some versions, the dead baby is even obfuscated by pixelation. Furthermore, the fact that it is shot at the actual crime scene of the explosion that happened only hours earlier certainly adds a sense of immediacy and spontaneity; that is, the forensic evidentiary value striven for to trust the images we see. At one point in the long version of the video, a white-gloved hand appears on the left side of the frame, signaling to move a girl’s body into his arms. They clumsily rearrange the small body in front of the livestreaming camera, her head dropping at one point, to be carefully picked up again. This emphasizes that while the scenography is staged, the very event of the press conference seems to be a spontaneous set-up, unpracticed, and put together in a hurry only hours after the tragedy at the very site where it took place. The shock that is still evident in the men’s faces demonstrates the chaos and desperation of the situation, opening for a less dichotomic reading of the video’s rather ambiguous relation between the logic of the snapshot and scenography.

Figure: Screenshot of the press conference-video, rearranging positions.

A more dialectical understanding of the video as an expansion of snapshot logic helps us understand its potential. The video might on the one hand be understood as bearing witness to the atrocities happening in Gaza as an image of evidentiary value; at least, this might have been the intention behind the set-up. The shock effect evoked by presenting a ‘staged war’ did manage to hurl the video and screenshots of it into circulation online, to be seen by the world and bear witness to the attack on a civilian hospital. However, it can no longer be found in the archives of the news stations that broadcast it and is now mostly available as very short 1-minute clips on YouTube or X, obfuscated by blurring or pixelation. One might speculate on whether the blunt display of death (e.g., the dead baby with guts spilling out), was perceived as an overexposure of violence. This hyperbolic form in combination with the ‘overly’ choreographed frame of the press conference was perhaps simply too harsh and shocking to have a real effect. Currently, the video’s status is pending between being rejected as ‘propaganda’ and taking part in the overstimulation of hyperaesthesia, its sentiment getting partially lost in the abundance of images online, generating an anaesthetics toward the violence. In contrast to the extreme exposure of images of the crater, the absence of the video in the media landscape in the subsequent days and weeks testifies to this. Despite this ongoing uncertainty, we can say that the video provides a different kind of testimony to the desperation of the condition, one of penetrating desperation, giving form to a practice of resistance.

A Scenography of Resistance

Finally, I will read the video in light of Thomas Keenan, who abandons the dichotomous relationship between snapshot and scenography just discussed and suggests a different reception of images. Specifically, Keenan observes that “there are things which happen in front of cameras that are not simply true or false, not simply representations and references, but rather opportunities, events, performances, things that are done and done for the camera, which come into being in a space beyond truth and falsity that is created in view of mediation and transmission” (1). Following Keenan’s train of thought, I explore what might happen if we read the clash between frames and logics in the press conference video as a form through which the video can be (perceived as) both staged and evidentiary at the same time, suggesting a reading in which the element of scenography expands the snapshot logic and creates a space for sensing the war differently.

Returning to Rosler’s photomontages, we can see that the press conference video conceptually mimics the deframing and reframing of elements identified above. By cutting out photos of wounded bodies and pasting them onto a backdrop of American living rooms, Rosler sought to deframe the perception of the war in Vietnam. By uniting contradictory logical frames, the press conference video challenges the snapshot's axiom by expanding its boundaries and creating a space for expressing the desperation of the oppressed civil population in Gaza. In a Rosler-like manner, the video de-frames the images of the dead by removing them from the spot where they died and re-framing them into the curated performance of the press conference. Like the other elements—spotlights, stage, microphones—the dead bodies have been staged within the camera’s frame to make a statement of facts. As already argued, the staging takes place under obviously desperate circumstances, in which this aestheticization becomes a way to express the condition of despair. Corresponding to Georges Didi-Huberman’s description of “agonizing bodies,” the speakers at the press conference gesticulate a resistance that becomes evident when the frame expands and alters the perception: suddenly the speakers’ constrained grimaces do not correspond to the shock of seeing this surreal scenography. The expected scream that evades their mouths is replaced by clenched teeth (see images 8 and 9), a sense of despair pressing from within: “Fury makes men grind their teeth” (Didi-Huberman 14).

Figure 8: Details of facial expressions.
Figure 9: Details of facial expressions.

Viewed in a historical context, as an instance of the Palestinian resistance practice sumud, the meshing of logics and frames in the video appears less deliberately ‘staged’ and more as a spontaneous form of everyday resistance under extremely desperate circumstances. In Arabic, the word sumud means “steadfast perseverance” and as a term, it broadly covers a collective Palestinian nonviolent everyday resistance against Israel’s occupation (Interactive Encyclopedia of the Palestine Question). The exceptional conditions of violence and mainstream media obfuscation and restrictions, that have been observed since Oct 7th, 2023, stand out from the previous Israeli attacks on Gaza. Due to this, the everyday acts of resistance have new conditions, partially because the everyday is now a warzone, but also due to the restricted media coverage, that forces the victims to ‘broadcast their own destruction’ (The Editors). The doctors leading the press conference in their scrubs (perhaps the same ones they wore when the attack took place?) next to the crater outside the hospital building manifest the merging of warzone and everyday life. The video’s meshing of logics makes it possible to express the clash between the warzone and everyday life that the Palestinian population experiences. In this light, the scenographic arrangement of dead bodies could be interpreted as an act of everyday resistance taking place under desperate circumstances.

Sumud has been defined by a practice of ‘remaining’ or ‘enduring’ Israel’s occupation. By broadcasting an act of remaining in the rubbles – staying and enduring – as a form of sumud, the press conference connotes the historical emphasis on “remaining on Palestinian land or in the refugee camps despite hardship” (IEOTPQ). In this sense, the press conference as an act manifests the adapted gospel song “We shall not be moved”, originally used in the civil rights movement and recently chanted by student protesters against the genocide in Gaza all over the world. In this reading, the video can thus simultaneously be seen as a counterimage and an after- image: by striking back with and through after-images of ‘truth,’ it challenges the conventional approach to depicting images of war. The video shows the aftermath of the explosion after the screams for the lost children have died out, bodies gathered as evidence for the world to see the atrocities of Israel’s war in Gaza. Through this scenography of resistance, the video breaks with the established aestheticization of war and expands the axiomatic logic of today’s war images. The scenography of the video thus serves a similar purpose to that of Rosler’s: re-framing the narrative of the conflict by showing what Jacques Rancière (concerning Rosler’s photomontages) described as “the obvious reality that you do not want to see” (28). By displaying the casualties of the explosion, the video bears witness to the suffering and horrors of Israel’s war in Gaza. The staging thus becomes an evidentiary practice, inter-visually connected to historical practices of documenting the horrors of war, e.g. evident in the aftermath of WW2. In this way, the video contests the fetishizing of evidentiary images today, by revolting against the idea that images of war should be pixelated or spontaneous to support their credibility or authenticity. From the perspective of the repressed, this video can thus be read as resistance, or in Didi-Huberman’s words, “a gesture of despair before the atrocity that is unfolding below, a gesture calling for help in the direction of the eventual saviors outside of the frame and, above all, a gesture of tragic imprecation beyond—or through—every appeal to vengeance” (10–11).

The press conference video was staged, meant to be seen. The bodies lying still in the frame do not reverberate the immediacy of the explosion, but rather the desperate conditions of war that disrupt the everyday life of the Palestinian citizens. By not instantly disregarding the staging as propaganda or disaster pornography, within a consensus of the mediatization of war images that hinges on a dichotomy between the evidentiary and staging, but rather viewing it as an expansion of the snapshot logic, it becomes possible to interpret the video as an act of everyday resistance. The men sitting in the front are not in official clothes or scrubs and their primary role is to literally bear witness to the violence by physically holding the lifeless bodies of (their own?) children. Taking Donna Haraway’s “staying with the trouble” phrase to its extreme, the civilians in this video are ‘staying in the rubbles’ to bear witness in front of the world and express the conditions of despair and desperation in Gaza.

Works cited

Baudrillard, Jean. The Gulf War Did Not Take Place. Indiana University Press, 1995.

Buck-Morss, Susan. “Aesthetics and Anaesthetics: Walter Benjamin’s Artwork Essay Reconsidered.” October, vol. 62, 1992, pp. 3–41.

CasaInfo. YouTube video of the press conference. October 18, 2023, https://www.youtube.com/watch?v=ufixN2LxXt8&rco=1.

Chamayou, Grégoire. Drone theory. Penguin Books, 2015.

Didi-Huberman, Georges. “Conflicts of Gestures, Conflicts of Images.” The Nordic Journal of Aesthetics, no. 55/56, 2018, file:///Users/au457119/Downloads/lmichelsen,+Conflicts_GeorgesDidi-Huberman-1.pdf

Fuller, Matthew, and Eyal Weizman. Investigative Aesthetics: Conflicts and Commons in the Politics of Truth. Verso Books, 2021.

Ganguly, Manisha, Emma Graham-Harrison, Jason Burke, Elena Morresi, Ashley Kirk, and Lucy Swan. “Al-Ahli Arab Hospital: Piecing Together What Happened as Israel Insists Militant Rocket to Blame.” The Guardian, October 18, 2023, https://www.theguardian.com/world/2023/oct/18/al-ahli-arab-hospital-piecing-together-what- happened-as-israel-insists-militant-rocket-to-blame.

Hirschhorn, Thomas. Pixel-Collage. 2015, http://www.thomashirschhorn.com/pixel-collage/, 2024.

Interactive Encyclopedia of the Palestine Question. https://www.palquest.org/en/highlight/33633/sumud, 2024.

Keenan, Thomas. “Mobilizing Shame.” The South Atlantic Quarterly, vol. 103, no. 2, 2004, pp. 435–449, Project MUSE, muse.jhu.edu/article/169145.

Ranciére, Jacques. The Emancipated Spectator. Verso, 2009.

Shullenberger, Geoff. “Baudrillard in Gaza.” Compact Magazine, October 20, 2024, https://www.compactmag.com/article/baudrillard-in-gaza/

Sjöholm, Cecilia. “Images Do Not Take Sides: The Forensic Turn of Images.” The Nordic Journal of Aesthetics, vol. 30, no. 61–62, 2021, pp. 166–170, https://doi.org/10.7146/nja.v30i61- 62.127896.

Steyerl, Hito. “Documentary Uncertainty”. The Long Distance Runner, The Production Unit Archive, No. 72, 2007. http://www.kajsadahlberg.com/files/No_72_Documentary_Uncertainty_v2.pdf

Steyerl, Hito. “In Defence of the Poor Image”. e-flux journal # 10, November 2009, https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/

Biography

Marie Naja Lauritzen Dias is a Ph.D. Candidate at Aarhus University, School of Communication and Culture affiliated with the department for Art history, Aesthetics and Culture and Museology. Her research centers around war and digital images, the militarization of the everyday life as well as contemporary art and other aesthetic image practices. ORCID iD: https://orcid.org/0009-0001-0217-1167

Esther Rizo Casado

Xeno-Tuning

Xeno-Tuning

Dissolving Hegemonic Identities in Algorithmic Multiplicity

Abstract

Xenoimage Dataset is an artistic practice that unleashes the hallucinatory capacities of image-generating AI to question the perpetuation of power dynamics inherent in normative gender dichotomies. Employing techniques called xeno-tuning, it adapts pre-trained models to produce weird representations of corporealities, criticising the homogenous tendencies and biases inherent in image datasets. The purpose is to define the visuality of the xeno as a transformative agent of current hegemonic identities.

The production of images has a crucial role in prospective thinking since we need to picture the future to face it. The Palaeolithic Chauvet cave was found with evidence on how hunters first painted the beasts they were going to kill and even stuck arrows on them, as if killing their image would put an end to anticipatory uncertainty. This study is based on the practice of creating a collection of images generated by artificial intelligence (AI) representing fictional and weird bodies. These bodies are not of the beasts we prepare to hunt; they are a visual imaginary where the multiplicity of represented identities enable "the right of everyone to speak as no one in particular" (Laboria Cuboniks). As the critical designers Dunne and Raby convey in their manifesto on speculative design: "We rarely develop scenarios that suggest how things should be because it becomes too didactic and even more moralistic. For us futures are not a destination or something to be strived for but a medium to aid imaginative thought – to speculate with." (3)

Historians of robotics conclude that, over the years, automata have been built for three fundamental functions: work, sex, and entertainment (Mayor 58). However, one of the activities we most like to entrust to AI models nowadays is that of predicting the future. This is especially true in fields such as education, work, security, politics and the environment (Zafra). Following Parikka's words: "Inferred and envisioned, futures form as part of the production of images" (11), this study uses synthetic images representing abstract bodies as speculative experiments on fictional identities. The missing visual exploration of the "xeno" which considers the networked image as "a cooperation between the quasi-autonomous operations of software and remediated socio-cultural forms. [...] a relational object with performative agency (and as such, it can also move or exist beyond the computational)" (Cox et al 40).

Looking at image creation statistics from 2023, text-to-image algorithms have generated as many images as photographers have taken between 1876 and 1975, a period of 150 years (Valyaeva). When we look at how images are generated by machine learning models, we observe that the past, in the form of datasets, is the only thing that feeds these systems. It happens even when the machine is connected to the internet in real time. Furthermore, the visual preterit with which we feed these models based on natural language interface is linked to a text description; a tag created by humans enabling machines to understand our world. Even if the humans behind tags, called annotators, represented all different cultures, genders and contexts, it would be inevitable that this image-tag combination would be biased. As Land mentions in his text Collapse: "it is not a question of building the future, but of dismantling the past" (49). Therefore, when we use generative models in the conscious act of image production as speculative thinking: will we be able to envision other futures non-concerned with an unresponsive past? Can we keep image generating tools away from human conservative injustices, such as identity biases or omissions?

Fascinated by how understanding anatomy is one of the most complicated concepts for image generative models, this study began by exalting the aesthetic value of the current mutations in generated content. Six-fingered hands emerging from machine learning bugs and computer vision's apophenia helps to visualise concepts of the xeno, the foreign, the liminal emerging from software cracks. The weird is a simultaneous and inseparable pleasure and pain, since the obscure or negative is what enhances enjoyment. As Mark Fisher understands it, positivity is an illusion: the world is a dark place, and we must be willing to confront that obscure weirdness. This uncomfortable feeling can be unsettling, but it can also be a way of confronting us with the unknown and discovering a way of dealing with life. Therefore, the collective art piece Xenoimage Dataset, on which this study is based, provokes visual hallucinations with algorithms like next-frame prediction or transforming visualisations of the body like a colonoscopy (see Fig. 1). In this example, images become speculative objects, showing other kinds of intelligences: a fluid computer-generated data object, non-delimited by realism or anthropocentric perspectives.

Figure 1: Hand generated with online software Krea.ai. Source: XenoVisual Studies archive.

The resultant proto-dataset is not pretending to be functional by training other generative models on xenoimages. The archive is considered as feedback between conceptual thinking and subject matter. Its collection of images acts as a multi-author with the agency of generating software-based bodies representing soft-weird identities. The deforming and abstracting protocols used for exploring the unknown can be encompassed by the term 'xeno-tuning': a disruption from the fine-tuning technique. Instead of training each model from scratch, xeno-tuning utilises the capabilities of a pre-trained base model that has already acquired extensive learning and adapts it to a specific purpose: exploring unknown corporalities. Xenoimages represent bodies that are impossible to identify due to their fluid, asexual or non-anthropomorphic characteristics. Therefore, xeno-bodies which are sometimes non-identitarian and sometimes multi-identities. These liminal spaces help us think visually about what is a body, an identity, and every thing that constructs them.

The study of xenoimages, initiated by the artist Mar Osés, and continued during the participatory lab Sintient Media, in 2022, in the contemporary cultural centre Medialab Matadero, Madrid, created along the lines of Amy Ireland's Xenopoetics (155), leaving aside the importance of authorship by collective thinking processes. Since then, and following the premise that "virtual images are not objects that can be created all at once, but processes in progress" (Bottici 61), the search for xenoimages has become a long-term investigation developed by the collective XenoVisual Studies (XVS). In addition to testing different creative and experimental exercises, the collective organises exhibitions and knowledge transfer activities with citizens from Madrid and Buenos Aires, and cybernauts around the globe. In these gatherings different intelligences, especially those affected by identity biases, come together with the objective of generating consciousness about current oppressive algorithmic imaginaries.

The Homophilic Context

Authors have been pointing to a homogenisation of human identity through contexts like scientific eurocentric hegemony, bureaucratic categorisation, work automatisation and social alienation. Postmodern feminist Rosi Braidotti rejects unitary identities modelled on the humanist, normative, and Eurocentric context (52). Franco Berardi describes work automation not as something to do with software but as the replication of these entrenched intentionalities and established forms of human relations (113). Stuart Levine and Marshall McLuhan argue that technology forces us into the entanglement of thoughts, desires, and passions of our fellow human beings as an alienation process (646). Lastly, Remedios Zafra warns us that the times we live in favour categories that hinder a necessary detour from excess and acceleration (102).

The cybersphere that emerged in the 1990s was a text-based interaction where anonymity provided a safe space to experiment with components of human identity. The current democratisation of image-making and the dissemination of photographs and selfies through mobile phones and web 2.0 have become design tools for our digital selves. The real impact of these tools has been that companies are the decision-takers in identity matters. Images are detached from their representative or human functions (Farocki; Parikka and Gil-Fournier 11); even those created by humans, become data, therefore operational images, feeding algorithms. We have gone from being the creators and promoters of our self-image to leaving it in the hands of those who design algorithms. The main purpose of the companies that design algorithmic social communications is to feed our fear of not being socially accepted, based on a sense of belonging (Navarro). That is why society tends to favour homogenising imitation as a cultural generator and curator. The bulk of digital identity is nothing more than wanting to look like others (see Fig. 2).

Figure 2: People of the Twenty-First Century (Eijkelboom).

The Xeno Tendency

Xenofeminists believe in new rationalism, and by that they understand all kinds of intelligences (human and non-human) to be found in the xeno, the unknown, a force of speculating new worlds like anti-identitarian futures. Furthermore, this cyberfeminism contextualises the xeno in the world of technology, as new technological incorporations are often seen as entities that do not fit the organic or physical world, difficult to understand, and therefore weird.

The choice of the particle 'xeno' – by the theorists Laboria Cuboniks, and for this study – has to do with the ancient meanings of weird: 'fateful', from the Latin 'fatidicum', the root of which is 'fatum', meaning 'destiny'. Therefore, we can speculate that the sensation attached to predicting the future is of a weird nature: "what we need are people who can become stranger than the strange world we have produced" (Davis 13). The word 'xenomorph' is originally linked to science fiction novels that through imagination, linguistic description, illustration and finally audiovisuals investigate the possibilities of other identities that do not belong to the living organisms we have seen on earth. This study considers the xeno in terms of speculative fiction to imagine other possible identities, thus avoiding oppressions of some bodies over others. The difference between science fiction xenomorphic bodies and mathematical models generating images with the aesthetics of the xeno has to do, on the one hand, with the potential of these computational combinatorics and, on the other, with the hallucinatory tendency of generative models. What we call 'artificial hallucinations' is the way machine learning models acquire knowledge; it is not situated, synchronous learning like humans, and it is precisely this that makes it worthy of further exploration.

Xenofeminists have faced criticism for choosing the prefix 'xeno' (Goh), mostly attached to derogatory terms like xenophobia or the term alien which shows little sensitivity to the Marxian conflicting meaning of 'alienation' (dehumanisation of the human being, creating a superhuman, a God). Aware of this critique, the XVS collective emphasises the subversive attitude with which it confronts the algorithmic culture of the image, extending this approach to the language realm. They subject the terms xeno or alien to the same decolonial process applied to the production of images. Alien, therefore, refers to the transformative potential of weirdness and the radical inclusion of diversity to re-imagine a more equitable and inclusive society. After these semantic concerns, one of the xenofeminist collective, Lucca Frasser, describes the meaning of alienation as 'disrooting' (translated by Toni Navarro as 'desarraigo') which XVS identifies as a non-identitarian scenario that connects perfectly with its investigation. Therefore, xenophilic practices include displacing prevailing hegemonic norms, like the tendency for attaching xeno to phobia or alien to Marx. On the other hand, the incendiary usage of the term 'appropriation' in Annie Goh's article "Appropriating the Alien", could be argued to be in the same direction. Appropriationism generally occurs when a privileged group adopts aspects of the culture of a more vulnerable group, often without permission and in an exploitative manner.

Another value of xenofeminism has to do with its anti-naturalist approach which does not understand nature and technology as separate entities. Following Mark Fisher's theory about The Weird and the Eerie (58), in which the 'weird' tends to live away from anthropocentric and normative landscapes, and inhabits the world of liminality, with its continuous mutations. It cannot be described as pure human or non-human, female or male, inside or outside. A relevant example of this non-binarism in the weird is the uncanny valley: when anthropomorphic replicas, like robots, come too close to our appearance and behaviour, we cannot classify them as human or machine, and it causes a rejection response when we observe or interact with them. XVS takes this uncanny valley feeling as a strategy for exploring unexplored spaces.

The Slow Cancellation of Hardware-based Future

Accelerationists like Mark Fisher conjure that nowadays "culture has lost the ability to grasp and articulate the present. Or it could be that, in one very important sense, there is no present to grasp and articulate anymore" (54). Their stated reason is "a lack of imagination" in late capitalist society. Simon Reynolds, on the other hand, argues that what has been dying is a certain metallic-robotic aesthetic, and that futurism lives in a surprisingly organic way (Reynolds); not only organic in the sense that this term is usually given, but full of organs like (artificially intelligent) brains. We can confirm this aesthetic turn by comparing the Kraftwerkian futuristic style of the 1970s, metallic and wiring, hardware heavy and robot-centric, in contrast with the futurability that emerges from software-based examples like applying distorting voice techniques, auto-tune, to human voices (Reynolds 11-19), artificial filters to our faces, or the poetry in fictional theory: "She becomes the Body without Sex Organs: The body in a virtual state, ready to plug its desire into technocapital, becoming fused with technocapital as a molecular cyborg who is made flesh by the pharmaceutical-medical industry" (n1x).

Having placed the chosen aesthetic in this soft-organic context, I will explain the different techniques on which xenoimage production is based.

Artificial Imagination and Xeno-tuning

Image generation and algorithmic prediction can learn from past acts and speculate about the future, but if the algorithm does not identify a statistically matched visual option in its prediction, it will practise apophenia by finding meaning in random stimuli. Not only will it invent the next image, but it will also be able to see shapes in the noise, and can produce an infinite number of variations. Following the tactic that xenofeminism uses for gender abolitionism (Hester) – advocating for a plurality of gender identities that transcend binary and rigid categories – XVS uses multiplication of fictional identities as a fluid where hegemony goes unnoticed, getting lost in the immensity of the archive. Experiments with the representation of bodies in art need not follow laws. We can explore themes of identity and anti-identity through digital fluidity which allows us to test visual incognitas: at what point does a human body cease to look capitalised, binary, human?

The history of science fiction has turned us into spectators of attempts to imagine through the hybridisation and combination of humans with other corporealities. The mixture between two worlds has been a recurring resource when creating these imaginaries: the panther woman, the fly man, Kafka's Metamorphosis, the alien, cyborgs, and many more. In speculative image theory, these hybridisations have been superseded by more complex analogies that connect with the xenoimage nature, for instance: Jussi Parikka's insect media as an anti-species example that challenges our traditional views of the natural and the artificial; or Donna Haraway's critters as companion species that are not just pets, but rather co-evolutionary partners that shape our identities.

By the 2010s, the creation of music and images began to take on ever-stranger forms of beauty. If you look into the excessively auto-tuned voices in pop music or the hands generated by generative algorithms, you will find a strange aesthetic fluidity: "Identities will be corrupted. Believe in the mystery of digital DNA reconfiguration. Allow your bones to crush, let your brain grow out of your skull. It's time we accept that our physicality is just a mere fantasy." (DB) Xeno-tuning techniques specifically allow us to modify a deep learning model for generating images trained on a dataset of normative bodies and redesign them to generate other types of entities. To this end, we have found various forms of xeno-tuning which are described below.

As a first experiment, to try how algorithms are trained in words like weird, xeno, alien bodies or genderless, we used the text-to-image models like Vector-Quantized Generative Adversarial Networks, VQGAN (Simonyan), which uses a vector quantisation technique to learn a discrete representation of data. VQGAN was used together with CLIP (Radford), a neural network model that is trained on a massive dataset of text and image pairs (see Fig. 4). The most prolific aspect was that in these 2022 models, the less they understood the text prompt, the more they tended to create unfamiliar corporealities.

Figure 3: Text-to-Image xeno-tuning protocols. Source: author.

Experimenting with image-to-image models (see Fig. 4), we used StyleGAN2 (Karras) for inputting other images downloaded from internet, photographs we took of our bodies or generated by 3D software Blender (Roosendaal), together with the generative plugin Human Generator (Curtis) and modified with RunwayML (Germanidis), a distributed computing platform that allows users to run machine learning models at scale. We have modified the parameters of the algorithms thanks to the open-source cloud-based environment Jupyter Notebook (Kluyver), which allows anyone to write and execute arbitrary Python code in the browser.

Figure 4: Image-to-image xeno-tuning protocols. Source: author.

Inside the image-to-image techniques, some xenoimages were generated by mixing flowers and porn image archives. This practice has shown us how the mix of these two worlds is not as simple as a human body with a flower head, but a rhizomatic (Deleuze) union of two systems within a more complex one.

Figure 5: Xeno-tuning flowers from the dataset ImageNet mixed with genital images downloaded from the internet. Source: Xenoimage Dataset archive.

Another xeno-tuning technique was generating dozens of normative bodies with the 3D software Blender and its plugin Human Generator. These bodies were then transformed with the RunWay ML model, forcing the algorithm to generate something uncategorizable.

Figure 6: Image on the left was generated with the Human Generator plugin; the other two were transformed with RunWayML. Source: Xenoimage Dataset archive.

Nudity, foreshortening, absence of the face, and more irregular pictures of human bodies confuse trained computational vision. The artist Claudix Vanesix, part of the collective practice Xenoimage Dataset, prompted images of their body with these kinds of conditions: nudity, absence of face and foreshortening framing. As a result, the algorithm speculated all kinds of creatures far from anthropomorphic tendencies.

Figure 7: Xeno-tuning photographs of Claudix Vanesix body with image-to-image algorithm. Source: Xenoimage Dataset archive.

Asking ourselves questions like 'are faces the ones that bear the burden of defining identity?', we also used the extant algorithm This Person Doesn't Exist (Wang) to generate human faces and xeno-tune them with Style-GAN 2.

Figure 8: Faces generated with This Person does not Exist and modified with StyleGAN2. Resource: Xenoimage Dataset archive.

Computational Prevision

Until computer vision, the only way for machines to predict, for example, the weather was through numerical models and measurements. Since algorithms can see, they analyse the images they capture to learn from them. Once they learn, they create speculative images of what might happen in the future based on what has already happened. This gives rise to forms of machine creation such as next-frame prediction (Zhou), synthetic apophenia (Parikka), and nowcasting. Nowcasting techniques, widely used in foresight practices such as meteorology, demonstrate how the very act of capturing images of the real world and recording them leads to a calculable future: images give rise to other images. Next-frame prediction algorithms can learn from past acts and speculate on the future, but if the algorithm does not find a statistically matching visual option in its prediction, it will practise apophenia: sensing meaning in random stimuli (Steyerl). Not only will it invent the next frame, but it will also be able to see shapes in the noise (Nguyen).

Filmmaker and theorist Harun Farocki compares the extent to which humans and machines are both similar and dissimilar when it comes to the act of seeing and recognition (Jacarilla). He makes us think about the loss of the human ability to differentiate between real and fictional images (see Fig. 10). At the same time, he acknowledges the ability of machines to see using image recognition and processing software. Farocki's operational images do not represent a process but are part of one. They are often invisible to the human eye (see Fig. 9). Most of them are created by machines for machines (Paglen), and therefore, nowadays, for humans, "Not seeing anything intelligible is the new normal" (Steyerl).

Figure 9: Name One Thing in This NFT. Source: Melip0ne [@melip0ne], Twitter, April 23, 2019.
Figure 10: Images identified by a DNN as numbers 0 to 9. Source: Zhang et al, 894–922.

We see how photography, in its evolution, affected by emerging technologies, has already moved away from the values that have been historically attributed to it. It is abandoning romantic notions of authorship, originality, truth and beauty. The man-machine combination becomes concrete, according to Farocki, in the eye-machine combination when he analyses the functioning of intelligent machines and what they see when they work based on image recognition and processing software. Deep Neural Networks (DNNs) seek to replicate how the brain processes information, but in some cases they even see what humans cannot. These have been called 'deceptive images' (Baudrillard). The paradox sheds light on interesting differences between our vision and that of current DNNs, and questions the inherent hegemony in ways of seeing.

Figure 11 shows another process for image generation using video frames: Pix-to-Pix (Isola). This GAN works with two databases. The first is composed by the frames from a real-time video, which in this case is a colonoscopy. The second is made with images generated with the algorithm exercise of predicting how the next frame of the video will look.

Figure 11: Speculative video process. Source: author.
Figure 12: Frames of a colonoscopy video and the algorithmic prediction of the next frame. Source: Xenoimage Dataset archive.

Results

The co-creation of a dataset with 18,000 images serves as an example of questioning and disrupting the context of human identities through the lens of speculative feminism. We tested how image-generative algorithms can lead the universal imaginary to ambiguous spaces with transformative potential. This prototype is the refunctionalisation of image databases in xeno-keys through the elaboration of protocols that alter and stimulate visual imaginaries outside of current normative identities. Feeding and collapsing networked imaginaries with liminal, non-binary materials and thus forming realities detached from cultural identity conditioning factors, algorithms have been able to generate a multiplicity of xeno-tuned visions.

Beyond the algorithmic mutations and visual archive, there are two key outcomes of this practice. The first one is a manifesto written by the initial eight members of the collective. The text is a descriptive piece that has been taking shape as the dataset has been filled with xenoimages. The text focuses on the contextualisation of the term xenoimage and it is accompanied by a glossary:

The xenoimage is the other. In the face of the systemic reproduction of the same, these Xeno-images are established as a possible resistance from the visual. They are conceived together with the sensibilities of Artificial Intelligence. AI as a technology expands the possibilities of visual representation of bodies, understood as abolished identity. This condition is represented not as the eradication of the features considered cultural from among these bodies, but as their disarticulation as mechanisms of discrimination (Hester, 2018), and translates into an apophenic image that cannot be decoded with our current epistemes. The xenoimage seeks to conform itself as a database, in whose multiplicity an indeterminate number of possibilities of making bodies is represented, proposing visual spaces of the other in opposition to an algorithmic system that favours sameness, the hegemonic and the normative. The potentiality of the xeno-image lies in its capacity to generate questions in its emergence.

The xenoimage has a hyperstitional power as it can be defined as a self-fulfilling prophecy. Hyperstitions, by their existence as ideas, function causally to create their own reality, in this case, a reality detached from the cultural conditioning factors of identity. Thus, a radical transformation is proposed that follows the foundations of xenofeminism, anti-racism and anti-speciesism. Thus, it is worth highlighting the potential hyperstitious character of the xeno-image as conjectures that intersect with the plane of the real and that hover above our bodies and threaten to become reality.

The xenoimage is generated through an open, collaborative and communal process. This protomanifesto does not intend to generate a closed condition of the xenoimage, but rather, given its unfathomable dimension, it proposes a space from which to propose such a multiplicity of definitions that any attempt to categorise them is impracticable. We share our study in the hope that those who want to can think with the Xenoimages from their situated knowledge, in the same way that we at Xenoimage Dataset have reflected on identity issues. We would like to see this as a project that mutates, evolves and transforms.

Full text at www.xenovisualstudies.com

The second outcome is the presentation of the archive as an aesthetic analysis instead of a functional data collection. After searching for non-hegemonic taxonomy systems, we realised that a non-anthropocentric and therefore non-hegemonic categorisation could be one generated by an algorithm. For this reason, we have chosen a dimensionality reduction technique suitable for the random visualisation of large datasets called t-distributed stochastic neighbour or t-SNE (Yale DHLab). On the one hand, this system's behaviour is based on computer vision, and computer vision itself is trained by human annotations (image-tag), therefore it has inherited anthropocentric biases. On the other hand, the final visualisation is not deterministic because it is influenced by random elements distributed at different points in the space where they are displayed. This visualisation (see Fig. 13) is for us the correct one because it allows us to see how a machine would organise the different xeno-images. It speculates on the criteria happening at the black-box of the machine learning process.

Figure 13: T-SNE algorithm with xenoimages. Source: XenoImage Dataset, 10 January, 2024.

Imagining futures through mathematical models like generative AI could have only brought us probabilistic results, drawing averages and modes, and probably generic results. Therefore, at the beginning it seemed AI would not be ideal for observing human diversity and multiplicity. However, using the proposed xeno-tuning techniques, we affirm that human-machine multiplicity can become a space for exploring new identities through abstract images of non-hegemonic bodies. Thinking xenofutures together with the sensibilities of generative AI has become a practice of resistance to the existing identity sameness. These cyborg fictional speculations allow the transgression from a systemic reproduction of the same to the fluid endless search of the weird. We can affirm that AI especially expands possibilities for the visual representation of bodies.

After this speculation, the collective realised that exploring weird identities has to be a multicultural, diverse, expanded practice, therefore the dataset is open online to anyone who wants to contribute. However, the objective is not the creation of more visual or functional data, therefore the collective facilitates theoretical and practical knowledge through face-to-face meetings and online platforms in its quest to expand the most transcendental character of this practice. The flow of generating operational xenoimages and feeding organic and algorithmic imaginaries keeps developing through the cyberfeminist collective XVS, which is financially supported through the Spanish Ministry of Equality, the cultural institution Medialab Matadero, and the Equality Unit at Universidad Complutense de Madrid.

Works cited

Baudrillard, Jean. Simulacra and Simulation. Translated by Sheila Faria Glaser, University of Michigan Press, 1994.

Berardi, Franco 'Bifo'. Futurabilidad. La Era de la Impotencia y el Horizonte de la Imposibilidad [Futurability. The Age of Impotence and the Horizon of Possibility]. Caja Negra, 2019.

Bottici, Chiara, and Zeta Books. "From the Imagination to the Imaginal Politics, Spectacle and Post-Fordist Capitalism." Social Imaginaries, vol. 3, no. 1, Philosophy Documentation Center, 2017, pp. 61-81. https://doi.org10.5840/si2017314.

Braidotti, Rossi. Posthumanismo [The Posthuman]. Editorial GEDISA, 2015.

Cox, Geoff, Annet Dekker, Andrew Dewdney, Katrina Sluis. "Affordances of the Networked Image." The Nordic Journal of Aesthetics, vol. 30 no. 61-62 (2021), pp. 40-45. https://doi.org/10.7146/nja.v30i61-62.127857.

Curtis, Holt. Human Generator. Human Generator 3D, 2021.

Davis, Erik. Tecgnosis [Techgnosis]. Caja Negra, 2023.

DB. H. Felina, [@0keyth/], "Transmutation." Instagram, 16 April, 2024, https://www.instagram.com/p/C51BaegKtJK/?img_index=1. Accessed 1 May, 2024.

Deleuze, Gilles, and Félix Guattari. Rizoma: Introducción [Rhizome: Introduction]. Pretextos, 1977.

Dunne, Anthony, and Fiona Raby. Speculative Everything: Design, Fiction, and Social Dreaming. The MIT Press, 2013.

Eijkelboom, Hans. People of the Twenty-First Century. Phaidon Press, 2014.

Farocki, Harun. "Eye / Machine III." MACBA Museo De Arte Contemporáneo De Barcelona, 2003. www.macba.cat/es/arte-artistas/artistas/farocki-harun/eye-machine-iii. Accessed 5 Jan, 2024.

Fisher, Mark. Ghosts of My Life: Writings on Depression, Hauntology and Lost Futures. John Hunt Publishing, 2014.

Fisher, Mark. Lo raro y lo Espeluznante [The Weird and the Eerie]. Caja Negra, 2018.

Germanidis, A. RunwayML. RunwayML, 2018.

Goh, Annie. "Appropriating the Alien: A critique of xenofeminism." Mute, July 2019. https://www.metamute.org/editorial/articles/appropriating-alien-critique-xenofeminism.

Konior, Bogna M. "Alien Aesthetics: Xenofeminism and Nonhuman Animals." ISEA 22nd International Symposium on Electronic Art, Hong Kong, 2016, pp. 88-92.

Haraway, Donna. Seguir con el Problema [Staying with the Trouble]. Consonn, 2020.

Hester, Helen. Xenofeminismo. Tecnologías de Género y Políticas de Reproducción [Xenofeminism. Gender Technologies and Reproductive Politics]. Caja Negra, 2018.

Ireland, Amy. Xenopoética [Xenopoethic]; "Nuevos Vectores del Xenofeminismo" [New Vectors of Xenofeminism], Laboria Cuboniks, translated and edited by Toni Navarro and Federico Fernández Giordano. Ediciones Holobionte, 2022, pp. 155-159.

Isola, Phillip, et al. "Image-to-Image Translation With Conditional Adversarial Networks." Cornell University arXiv, Jan 2016. https://doi.org/10.48550/arxiv.1611.07004.

Jacarilla, Marla. "Harun Farocki. Imágenes Operativas y Creación de Campos de Batalla" [Harun Farocki. Operational Imagery And Battlefield Creation], A*Desk, July 2023, www.a-desk.org/magazine/harun-farocki-imagenes-operativas.

Karras, Tero, et al. StyleGAN (Version 2). Nvidia, 2020.

Kluyver, Thomas, et al. Jupyter Notebooks – A Publishing Format for Reproducible Computational Workflows. Positioning and Power in Academic Publishing: Players, Agents and Agendas, edited by F. Loizides and B. Schmidt, Proceedings of the 20th Inyternational Conference on Electronic Publishing, 2016. https://www.iospress.com/catalog/books/positioning-and-power-in-academic-publishing-players-agents-and-agendas.

Laboria Cuboniks. "Xenofeminism: A Politics for Alienation: Laboria Cuboniks." A Politics for Alienation, 0x04, 2015, www.laboriacuboniks.net/manifesto/xenofeminism-a-politics-for-alienation/. Accessed 4 July 2024.

Land, Nick. Colapso [Collapse]. Aceleracionismo. Estrategias para una transición hacia el postcapitalismo. [Accelerationism. Strategies for a transition to post-capitalism], edited by Armen Avanessian and Mauro Reis. Caja Negra, 2017, pp. 49-65.

Levine, Stuart, and Marshall McLuhan. "Understanding Media: The Extensions of Man." American Quarterly, vol. 16, no. 4, Jan 1964. https://doi.org/10.2307/2711172.

Mayor, Adrienne. Gods and Robots: Myths, Machines, and Ancient Dreams of Technology. Princeton University Press, 2018.

Nguyen, Anh, et al. "Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images." In Computer Vision and Pattern Recognition (CVPR '15), IEEE, 2015. arXiv.org, 5 Dec. 2014. arxiv.org/abs/1412.1897.

Navarro, Toni. "Prologo" [Prologue]. "Nuevos Vectores del Xenofeminismo" [New Vectors of Xenofeminism]. Laboria Cuboniks' texts translated and edited by Toni Navarro and Federico Fernández Giordano. Ediciones Holobionte, 2022, pp. 9-22.

Navarro, Toni. "Xenofilia Algorítmica" [Algorithmic Xenophilia]. Soundcloud Audio, XenoVisual Studies Center, Medialab Matadero, https://soundcloud.com/xenovisual-studies/xenofilia-algoritmica-por-toni-navarro. Accessed 14th February, 2024

n1x. "Gender Acceleration: A Blackpaper." The Anarchist Library, theanarchistlibrary.org/library/n1x-gender-acceleration-a-blackpaper. Accessed 24 Apr. 2024.

Paglen, Trevor. "Operational Machines". E-flux Journal, no. 59, November 2014. www.e-flux.com/journal/59/61130/operational-images. Accessed 1 June 2024.

Parikka, Jussi, and Abelardo Gil-Fournier. "'Visual Hallucination of Probable Events', or, on Environments of Images and Machine Learning." MediArXiv Preprints, Aug 2019, https://doi.org/10.33767/osf.io/wx98s.

Radford, Alec, et al. CLIP. OpenAI, 2021.

Reynolds, Simon. "Prólogo [Prologue]." Gritos de Neon. Cómo el Drill, el Trap y el Bashment hicieron que la música sea novedosa otra vez [Neon Screams. How Drill, Trap and Bashment made music new again], edited by Kit Mackintosh, translated by Micaella Ortelli, Caja Negra, 2022, pp. 11-19.

Roosendaal, Ton. Blender 3D. NeoGeo, 1994.

Simonyan, Karen, A. N. Gomez, and A. Vedaldi. VQGAN. Google, 2021.

Steyerl, Hito. "A Sea of Data: Apophenia and Pattern (Mis-)Recognition." E-Flux Journal, no. 72, 2016. www.e-flux.com/journal/72/60480/a-sea-of-data-apophenia-and-pattern-mis-recognition. Accessed 24 Apr. 2024.

Yale DHLab. "pix-plot." GitHub, Yale DHLab, https://github.com/YaleDHLab/pix-plot.

Valyaeva, Alina. "AI Image Statistics: How Much Content Was Created by AI". Everypixel Journal - Your Guide to the Entangled World of AI, 15 Aug. 2023, journal.everypixel.com/ai-image-statistics.

Wang, Philip. This Person Doesn't Exist. 2014. thispersondoesnotexist.com.

Xenoimage Dataset Collective. "Xenoimage Dataset Presentation" YouTube, uploaded by Medialab Matadero, 6 July 2022, www.youtube.com/watch?v=3pzMsNSttBY.

XenoVisual Studies, 2024, www.xenovisualstudies.com. Accessed 1 May. 2024.

Zafra, Remedios. El Bucle Invisible [The invisible loop]. Oviedo. Ediciones Nobel, 2022.

Zhang, Xu-Yao, Cheng-Lin Liu, Ching Y. Suen. "Towards Robust Pattern Recognition: a review." Proceedings of the IEEE, 108(6) (2020), pp. 894–922. https://arxiv.org/pdf/2006.06976.

Zhou, Yufan, Haiwei Dong, Abdulmotaleb El Saddik. "Deep Learning in Next-Frame Prediction: A Benchmark Review." IEEE Access, vol. 8, Jan 2020, pp. 69273–83. https://doi.org/10.1109/access.2020.2987281.

Biography

Esther Rizo-Casado is a lecturer at ESIC University in Madrid and a researcher at the Complutense University of Madrid (UCM). In addition to her academic roles, she is an accomplished artist and an active member of the XenoVisual Studies collective. With a deep commitment to advancing the integration of women in art and technology, she fosters community engagement through collaborative interdisciplinary projects, supported by the Spanish Ministry of Equality and the UCM Equality Unit (2023-24). Rizo-Casado is the author of a seminal book on life-centered design (2021) and has produced speculative writings that investigate into the intersections of xenofeminism and emerging visual technologies. ORCID: https://orcid.org/0000-0003-0386-6048

Mateus Domingos

Unstable Frequencies

Unstable Frequencies

A Case for Small-scale Wifi Experimentation

Abstract

In this essay I describe an experimental wifi network sideBand which provided access to a simple message-board during the Content/Form workshop, held at Haus der Kulturen der Welt in January 2024, as part of transmediale. This is considered in relation to the collaborative infrastructure ServPub that was carefully assembled and maintained for the workshop. The production of the hardware and software required for the sideBand network is described with specific consideration of the types of memory and data storage systems that are utilised. The programming that provided message-board (and wider) functionality is also described. The conditions of this production along with the specific ways in which use of the network unfolds are considered in relation to Dunbar-Hester’s propagation and a Luddite framing, that includes sabotage and refusal. This is argued as producing other ways of understanding infrastructure and place, related to the feminist methodologies of ServPub.

Introduction

The Content/Form research workshop questioned of the ways in which research practices are "shared and reviewed, and the infrastructures through which [they are] served." Participants gained access to the specially convened ServPub infrastructure, engaging with the self-managed server and the various software and memory systems provided.[1] Inviting space for a parallel imagination of these networks, I used wifi-enabled micro-controllers to introduce offline access points, serving a simple message board (referred to hereafter as sideBand). I argue that this intervention provokes a counter-knowledge of the internet and networking infrastructures. My intention was for sideBand to act as a shadow network to the primary shared space of ServPub, augmenting our interactions with the server and the collective writing processes.

Networked data streams always function through protocological layers which perform different functions that in combination allow communication or transmission. They do so within a chain of binary data operating along a one-dimensional line. As Alexander Galloway describes:

At each phase shift (i.e., the shift from HTML to HTTP, or from HTTP to TCP), one is able to identify a data object from the intersection of two articulated protocols. In fact, since digital information is nothing but an un-differentiated soup of ones and zeros, data objects are nothing but the arbitrary drawing of boundaries that appear at the threshold of two articulated protocols. (Galloway 52)

This nesting of protocols extends to the transport medium, such as "fiber-optic cables, telephone lines, air waves, etc." (Galloway 11) as well as bodies.

As Deleuze shows in the 'Postscript on Control Societies,' protocological control also affects the functioning of bodies within social space and the creation of these bodies into forms of ‘artificial life’ that are dividuated, sampled, and coded. (Galloway 12)

The ServPub infrastructure and feminist technological practices produce a visibility of the nodes of networking infrastructure. This is evidenced particularly in the visual diagramming and open documentation of installation and maintenance processes that accompany the group’s publications, talks and interventions. The sideBand microcontrollers pursue a further visibility through smaller discrete components and selective use of existing infrastructures. The sideband infrastructure is more brittle than the wider network, because in the spirit of permacomputing and visibility, technical capability is traded for limited parameters that surface protocological moments into active decisions. Using this limited reduced component architecture, narrows the scope of this inquiry, as a strategy for gaining some specificity of the various material entanglements. For instance, it becomes somewhat more trivial for someone without formal technical knowledge of computer hardware to learn where the memory is inscribed and under what levels of permanence, and also begin to follow the combination of manufacturers at least partially involved in the assembly of this microcontroller.

Following the naming conventions of the newspaper and wiki4print (Berends, Browne), I’ll use the word editors to refer to users of the wiki4print platform, which includes the workshop participants, organisers and caretakers of the infrastructure.

Mapping

The ServPub infrastructure has been documented in depth by its participants in the newspaper, online, and within this journal. Based on the diagram by Mara Karagianni, I include a brief outline of the structure here in order to contextualise the observations I am drawing. This diagram also shows how the sideBand network occupies space alongside the ServPub network.

Figure 1: Network diagram (based on Karagianni).

The diagram maps the modes of connection to the sideBand network, the ServPub organised wiki4print platform and common retrieval of websites. The editor is imagined as selecting a wifi network or service set identifier (SSID) to join at which point the paths of data diverge. In the organisation of this diagram, blocks closer to the top of the diagram are more likely visible to the editors. This visibility mostly aligns with opportunity for control over those blocks (e.g. through removing the power source).

Our interaction and use of both the sideBand message board and the ServPub wiki as editors are performed within a web browser. With ServPub this included complex software such as Etherpad and the MediaWiki (Crocker, Manske) based wiki4print, described as "not only a repository tool for print, but as something that contributes to a larger whole and network of voices." (Content/Form 1) There is a distinction in the way the platforms are maintained, or that adjustments can be made to their configuration and programming. Within ServPub this will often include the collaborative programming that is done via SSH[2] sessions and the shared command line software tmux (Marriott). The functioning of the device can be rewritten as needed potentially whilst services remain online.

In the sideBand network the access points are limited to the programmed configuration and whatever other affordances may be found in the conventional processes of port forwarding. Re-programming of the sideBand access points is possible (and was encouraged) but would force disconnection and erasure of any active connections.

A key distinction, which situates the following essay, is contained within the first step from editor device to the network. The ServPub setup was joined via the wifi provided by the workshop venue, and as such, not controlled by the participants. As the network included multiple servers in different locations, the data was making long international loops to reach back to the device that was in the room. Conversely the sideBand network was contained within the room, with data passing directly between the sideBand device and the editor device.

Side Bands

This kind of networking performed by the sideBand, could perhaps be described as shadow-networking, borrowing the nomenclature from shadow libraries and the dark web. These practices challenge the permitted use of existing infrastructures. Their operation follows pirate principles (Sumi 13). The uses of sideBand that editors took most interest in was perhaps those that performed as elements of a shadow library structure. I am using the term sideBands to refer more specifically to a technical phenomenon, that relates to the relational quality of the networks.

Side bands are the range of radio frequencies either side of the carrier wave modulated by the transmitted signal. This transmission is mirrored as an upper side band and a lower side band. This occurs within the transmission of both analogue and digital information. Transmitting either one of these frequency bands can allow the signal to be reconstructed by a receiver. Protocols have been developed to exploit this artefact of modulation, and their continued development is of continued interest within amateur radio communities. Within the limited bandwidths available, techniques that produce signals at narrower bandwidth have always been important and novel developments are often made by tinkering amateurs.

Rhyming with the side band phenomenon, the kind of networking explored with the sideBand exists within the affordances of protocol that collectively assemble into a functioning internet. A selective use and adaptation of parts of this framework generate a different system of transmission. It directly utilises a narrowing of frequencies and the ability to re-assemble transmissions. This occurs through the selective use of wifi channels and the local, social structure, or bodies, that are present with the devices.

It is in some respects an inversion of the ServPub model. Instead of creating a communal, distributed infrastructure, a looser proliferation of isolated networks is expected. This inherently limits the scope of the activity whilst surfacing a visibility of components and network paths. The hardware is built from comparatively few components, most of which are clearly visible. This affords an awareness of processes that relates to Christian Ulrik Andersen and Søren Bro Pold's framing of technology critique that combine "semantic processes of signification with machinic processes of signals." (14)

There is also a link here to radio activism, as described by Christina Dunbar-Hester in her account of low power FM radio activism and the Prometheus Radio Project. Dunbar-Hester argues that these radio activists are "'propagators’ of technology."

I use the term propagation specifically to refer to the intertwined practices of discursive and material engagement with an artifact. Propagators shape the form, meaning, and use of a given artifact through argumentation and mediating work to audiences such as users or regulators." (Dunbar-Hester 23)

Propagation, as the phrasing implies a transmission of knowledge, its reception and ongoing transmission. Crucially this includes practical engagements with an artifact that might not produce an improvement or objective change in the functioning of that artifact. Instead, it is the political and social relationships that are felt as important in the act of propagation. Dunbar-Hester describes this through the example of a weekend with activists spent repairing a decommissioned radio transmitter, that was undertaken without a clear use for the transmitter identified and did not fully complete the repairs (69). Much of Dunbar-Hester’s study reflects on the Prometheus organisation’s barnraising events. These large gatherings were the culmination of extensive processes of planning, licence application and fundraising that performed the physical construction of a low powered radio station. The technical processes involved in the construction of the station were often relatively trivial to experienced technicians, however it was important to the activists that the barnraising proceeded in a way facilitated learning, in combination with the logistical and care work that was also essential to the event (20).

My use of the specific sideBand micro-controllers stems from their apparent potential to fulfil this kind of propagation entanglement: their functionality offers a low-code entry into the programming and exploration of networking devices. Given their inherent limitation, they contain a surprising amount of flexibility, leading to interesting experimental use cases. The sideBand nodes suggest an expanded introduction to these techniques.

Following the imagined propagation of these isolated networks, comparisons can be made to the historical resistance of the Luddites. If the aim of these engagements with networking infrastructure include the possibilities of changing them, reorienting them towards more equitable ends, then the template of hardware destruction that the Luddite resistance performed can be instructive.

Luddism in its original guise was resistant to changes rendered in the workplace, often embodied in the deployment of machinery that automated increasing stages of production processes. Its methods of refusal and sabotage attacked these machines, and it is commonplace to articulate a Luddite stance in regard to contemporary technologies, or as certain effects of their use become apparent. Despite its eventual suppression and failures, Luddism suggests techniques for organisation and collective action. The methods of resistance employed and reasons for it were not unique to Luddism, but rather appear as part of a tapestry of woven solidarities and movements that emerged from the specific conditions imposed on the population (Thompson).

Dunbar-Hester is careful to distinguish propagators from 'mere' Luddites (Dunbar-Hester 181). The radio activism she describes emerges more through uses of specific technologies and appeals to legal frameworks than a retreat from legal options and a destruction of machinery. I don’t quite draw such a distinction as the Luddite activities did include attempts at gaining certain protections through charters and the halting destruction of machinery could reflect a radical flank of a wider movement agitating against the imposed conditions (Malm 50).

The sideBand provocation was suggestive of some of the Luddite strategies, if not also performing them in some minor ways. Including, forms of sabotage and refusal. Sabotage – or the withdrawal of efficiency (Flynn), is enacted through occupation of the network spaces with brittle replacements (sideBand micro-controllers). Refusal is understood through the call to "put down all machinery harmful to commonality" (Binfield 57). Recognising implicit harms to commonality in conventional computational technologies and networking infrastructure, the sideBand escapes (or limits exposure to) some aspects of the conventional networking infrastructure (which is essential to larger scale networking) as does Servpub.

Through the limited sideBand nodes, there is a potential that these processes of sabotage and refusal can be negotiated and performed, if only as a limited moment of reflection and critique within the small scale of the workshop. As networking complexity increases, the malleability of the systems for novice programmers is diminished. This isn't intended to argue for non-complexity in the infrastructures we attempt to build as communities. It is instead about a parallel point of access that supports propagation and provides waypoints to complexity.

Components

The ServPub network operated across a range of machines. Centrally it was using multiple Raspberry Pi's. In the same way that all commercially available computer systems should be challenged regarding their ecological impacts and ethical implications (Arboleda; Kara; Starosielski), the Raspberry Pi can be seen as a troubling device for communities engaged with anti-colonial and feminist protest.

Despite the apparently democratising potential of Raspberry Pi at its outset, the egalitarian potential has not been fulfilled. Although it remains relatively cheap and benefits from established documentation and experimentation; a planned Initial Public Offering (Halfacree), fraught social media engagement (Shaw), and a frequent lack of availability to hobbyists as stock is bought up by industrial firms (Upton), all introduce questions about the suitability of the system. The sideBand uses devices that are no less valid as targets for scrutiny and questioning as to their suitability. They exist however, within a different set of conditions that seem to both reveal and hide various elements of their manufacture to different degrees.

The ESP8266 is produced by Chinese semiconductor firm, Espressif. They describe themselves as a world leading AIoT (Artificial Intelligence of Things) company and by late 2023 they had shipped over 1 billion chips (Espressif). They are well known for their ESP range of chips which feature networking components such as wifi. Their devices are designed to work with Espressif's own open-source SDKs as well as other alternatives, such as Arduino. As with Raspberry Pi, they are designed primarily for industrial use. The ESP chips have also been developed into so-called development boards. This places the chip within a printed circuit board that contains the necessary components to enable simple programming and operation, allowing inputs/outputs and USB connection. The model I am using is the D1 Mini, originally designed by Lolin (based on a module by Ai-Thinker). As they became available in these hobbyist-friendly forms they were quickly adopted as a flexible part of IoT (Internet of Things) tinkering, despite initially having no English documentation (Benchoff).

Figure 2: Diagram of some manufacturers at different stages of the development board production.

The D1 Mini contains a handful of visible components. The antenna is a copper trace etched onto the board like a strange glyph. The intended use of the board is signed and performed by this as the length of the trace corresponds to antenna theory and is (with an adjustment for impedance) a quarter wavelength long (Pattnayak, Thanikachalam 4). Because the hardware is open sourced it allows hobbyists to experiment with the construction of these boards. This structure means that it is possible for hobbyists to design their own versions of these development boards and have them manufactured with relative ease (Feranec). This means that there are also many different companies producing these development boards, and that they continue to be readily available despite their limitations and superior later models.

Figure 3: ESP8266 Block diagram (based on Espressif).

To program the board I have been using Arduino, "an open-source electronics platform based on easy-to-use hardware and software." ("What Is Arduino?") It is a popular choice for hobbyists to experiment with electronics, often utilising sensors and other components. The integrated development environment (IDE) shares a lineage with experimental artistic coding as it built on Wiring (Barragán) which the creators of Processing, Casey Reas and Ben Fry describe as "an electronics version of Processing that used our programming environment and was patterned after the Processing syntax" (Shiffman). It's relevant to note as well, that Reas and Fry also express their excitement at the "iteration and growth of this community" and cite the earlier software and projects that informed Processing.

Message Board

The sideBand locally hosts a simple message board website. It is a small page of HTML that assembles itself each time it is requested, to include any new data submitted by the editors. It makes use of a handful of basic HTML elements with styling performed inline. The more complex element is the form, which provides the functionality for editors to post messages. The messages are stored as variables that pass through the Arduino code and are able to act in parts of the program other than the displayed HTML. Despite being messy and inefficient, the code for this program is quite accessible for editors with little coding experience. Much of the functionality of the sideBand server is contained and performed within libraries that are part of the IDE. In the process of programming, we can call on these libraries and use certain functions with the expectation that they will continue to perform expected tasks (such as handle new editors joining the SSID).


Figures 4, 5, 6
 
 

Foregoing some of the checks that might be made as standard practice if deploying a form online primarily to avoid code injection attacks, minimal checks are here performed on the validity of submitted data. This introduces an unstable zone, where editors have increased agency over the functionality of the message board. They could, for example, submit HTML markup to alter the display of the posts. This is also a vector through which the editors could disable the board if they chose to. This brittle state can be both useful and limiting.

Memories

When using computers, we are interacting with machines that manipulate data and instructions following established architectures, with differing affordances. These machines are usually based on the Von Neumann architecture, in which the "data and instructions are both stored in primary storage" (BBC). An alternative to this is the Harvard architecture, meaning that data and instructions are stored separately. This architecture remains common in microcontrollers where the limited instruction and data memory make it suitable (Fouilloux). This provides an interesting point of difference in the fundamental interaction with these devices and general processes of computation.

The sideBand program stores variables in static random-access memory (SRAM). This is a volatile method of data storage, which means that when it loses power the data is lost. Conversely, flash memory is used to store the program information. This is a non-volatile method of data storage, with permanence, whether or not the device is powered. However, it is still vulnerable to damage, and the processes of re-writing or re-flashing the memory will cause failure after a number of cycles (often estimated at 10000). This is because, with each re-write there is a physical degradation of the oxide layer which prevents the controlled manipulation of the electrons (Bez et al; Sheldon, Silwa). Flash memory itself is organised along principles that are familiar to modes of publishing. It is arranged physically as stacks of pages, laid out in grids that look like the galleys of a book. This works metaphorically through diagrammed understanding of the board (fig. 7) and as a potential act of reading. Given this material configuration it would actually be possible to read the code visually using advanced microscopy techniques (Courbon, Skorobogatov, Woods).[3]

Figure 7: Flash Memory diagram (based on AnandTech).

Accessing the sideBand network would in most cases force a user to disconnect from other networks since the usual operation of phones or laptops only allows for a connection to one wifi network at a time to avoid routing conflicts. On some devices this process prompts a warning to the editor that they will not be able to connect to the internet. In the process of connecting to a new network some devices will try to ping a service, in order to receive a response confirming they are online. The response generated when connected to the sideBand network is unsuccessful and so triggers the warning. The device presents this as something you shouldn't do and circumventing it usually takes a few more steps to assure the device that you do wish to connect to this offline network. This hints at the levels of invisible communication our devices have with centralised platforms throughout their use.

If the sideBand microcontroller is reset or encounters an error, the data posted to the message log is not saved. Drawing on Eugene Thacker's assertion that "the moment of disconnectivity is the moment when protocol most forcefully displays its political character" (Thacker xvi) we can identify this as a crucial part in our understanding of the political possibilities that the sideBand represents. The use of the network is collectively understood to be temporary and subject to either chance failure or forced failure (as through code injection, or switching off the power supply). This necessarily shapes what a community might decide to share there or not. Methods of saving the log could also be enacted if the editors wished. Once the page has loaded to their device they could save the page locally or make screenshots at any time.

Increasing usage of the sideBand also increases the likelihood of it failing. The code was written in such a way that it would soon reach the limits of its memory and require resetting. This was indicated in the website interface with a counter that showed current available memory. The sideBand is brittle and limited. For any sustained use it requires a careful, deliberate kind of engagement; with an expanded possibility to slow, pause or preserve the data that emerges.

Admission

The use of the sideBand starts from a collective expectation of its limits as described above. It is important that they were introduced to the group in this way to be clear in the way that the network was lacking features that might be expected. Admission to the space requires knowledge to be shared between peers. In this sharing of the password the group of editors becomes possible. Bypassing a usual layer of internet browsing, the message board website is reached at an IP address instead of a domain name.

Beyond the option of removing the sideBand power supply there are no moderation tools or policies enforced by the software. The acceptable behaviours of a group of people working together, with computers that might usually be ceded to established hardware and software limitations or a coded range of possibilities remain negotiable using other channels within the room, such as voice, a handwritten note, or list of rules taped to a wall.

This process of admission and group formation echoes the use of oaths and passwords within many organisations, covert or not. The Luddites, for example, called the initiation of the oath, "the tying in". It was deemed at that time a transgressive act, and punishable by execution or transportation (Ludd). These processes of course rely on the bonds of trust and honesty between the participants, and produce an interdependence For the Luddites this was frequently undermined by the infiltration of government spies. The uses of the sideBand described here do not necessarily produce outcomes that require careful sensitivity as to the process of admission. However, it does form a space that is potentially subject to legal ramifications both in the content posted to it, and the locations of its deployment. An example of this could have been political organising, particularly around calls for "boycotting, divesting and sanctioning The Cloud Regime" (Counter Cloud Action Day), which were present at the festival and subject to differing levels of censorship (Anti-Colonial Tech Panel), along with the anti-extremism protest that took place metres from the festival venue (Rinke and Steitz).

Interference

Within the room these sideBand microcontrollers push up against, make fuzzy, and interrupt conventional networking. Wifi routers have a series of 20MHz channels available across which they can make connections. These can usually be configured to work within a different sector within the range, if needed, to avoid interference.

During the workshop, with many devices in the room there was no noticeable disruption caused by the sideBand nodes and channel switching or crowding wasn't explored. In programming the sideBand nodes it was necessary to make active decisions and predictions about the potential outcomes, through the channel settings. This draws attention to the possibility of interference that occurs in the physical operation of the devices. Qualities of the network usually assumed to be ephemeral are brought into question and consideration.

A more noticeable interference caused by sideBand was the momentary movement of collective writing from the Etherpad to the sideBand message board. The Etherpad setup was already providing an engaging place for the collaborative note-taking of our individual presentations to develop. This kind of writing often develops its own etiquette and styles based on the participation of the group. To investigate the message board, the Etherpad space had to be temporarily left, leaving a gap in the recorded notes.

This activity of wifi misuse recalls the practices of pirate radio which operates against the legal conditions imposed on the electromagnetic spectrum often broadcast within the same frequencies assigned to other stations. Lower powered, temporary and sometimes portable equipment, means that signal propagation will often be relatively local. The stations can be received by a simple consumer radio receiver. Wifi access points may similarly broadcast to a local area, indiscriminately visible to wifi enabled devices. The devices needed to join are (like the radio receivers) generic and designed to operate with these protocols.

Carrier Waves

If part of our exploration of the ServPub server is understood in relation to conventional methods digital networking, how does it react to the smaller unconventional methods of the nodes. In practice the sideBand nodes produce a friction. To connect to the message board or run other services, means disconnecting from the collective spaces of writing on the ServPub server, and the wider internet. It is a hardware switching that interrupts flow.

The convivial group editing of the wiki, kept the process generously open up to the point when the files needed for printing the newspaper iteration of the journal, the key outcome from the workshop, were generated in the form of a pdf. This is an iterative encapsulation that can be run multiple times. The point is that it does reach a stasis, something objective, that allows it to travel outside of the workshop - to produce research. Counter to this the sideBand produces a timeline of comments, shared within a small range, only until the moment of disconnection.

Dunbar-Hester describes the propagators of Prometheus Radio Project considering the use of wifi and webcasting. Whilst the use of webcasting technologies was deemed mostly not as providing “an equivalent alternative to FM” (164), the possibility of community wifi networks was pursued. This included events similar to the construction of low powered radio stations, but towards the construction of community owned wifi networks. Essential to this process, and a source of contention with other organisations was the Prometheus Radio Project's need for such installations to "lead to a more egalitarian distribution of expertise" (178) in the same way that their work with radio did. The control of infrastructure that a wifi network might produce within a community, and the potential for "networks as platforms for community media rather than Internet connectivity" (181) clearly aligns, for these activists, with the production of radio stations that respond to local conditions and needs.

The sideBand node can also act as a platform for community media. This is a condition inherent in the way routers follow protocological standards. The functionality this provides might be familiar to anyone who has configured a LAN party, set up home automation devices or attempted to wrangle printer software. In public/professional settings there might be additional port settings that interrupt complicate matters, and local device firewall settings can also add wrinkles.

A possible use of this could be to share personal shadow libraries, or create ad-hoc local web-rings. Using popular software, such as Calibre (Goyal) and Visual Studio Code's Live Server (Dey) extension makes both of these examples relatively simple. As mentioned earlier in the in the Admission section, this becomes practical in settings where the accessible networks are subject to particular censorship or surveillance.

The microcontrollers that sideBand uses have found similar uses in various artistic and experimental projects. Philadelphia based artist space, Iffy Books published a zine which, referencing the work of Dennis de Bel and Melissa Merritt details instructions for a pocket wifi portal. The uses suggested by Iffy Books include:

  • Share an article/essay/political slogan with anyone who happens to be at the coffee shop.
  • Promote an upcoming event without using social media.
  • Share a poetry anthology with other commuters on your train.
  • Share maps and information on a hiking trip without cell reception.
  • Use several wifi boards to send a message using SSID names alone.

These playfully construct a variety of ad-hoc publishing techniques and scenarios, which the zine format also contributes to.

There is a direct link in the formation of these projects to the practices of router hacking that emerged in 2004, building on the OpenWRT protocol. This gained prominence with PirateBox (Darts, Strubel), which was further developed by forked repositories such as LibraryBox (Griffey), both no longer maintained. Projects such as Bibliotecha (Buzova et al) perform similar functions – that is the sharing of media through offline networks. The specific mode of user participation within such networks is a key aspect of the value of these projects in as the Bibliotecha developers write:

When you’re going parallel/serve locally like an actual physical library, the content comes from a particular context and is served in this context. There is a real purpose to it, it is not just sharing for the sake of sharing. (Laforet et al 44)

[4]

Beyond the scope of Wi-Fi, other protocols offer similar potential for experimental networking. To give two examples: Near Field Contact is notable particularly for the fact that the device holding the data is powered by the device receiving the data, through the process of inductance when it receives the radio signal. This results in very constrained data limits, however the devices are now ubiquitous, used extensively in stores to tag items. It is also utilised by smartphones, to perform contactless payments – meaning that these devices can read and write these chips. Secondly, LoRa (Cycleo, Semtech), which is visibly similar to the sideBand boards (usually utilising the newer ESP chip, ESP32) that is a proprietary technique for long range wifi, often conceived as forming mesh networks that pass data node to node, circumventing big tech and state infrastructures.

This limited sideBand experiment performed a small intervention that affirmed the practices developed by the ServPub collaborators. It would be interesting to deploy the boards again in other settings, and expose their code to other groups of people and networking devices.. Operating in these ways, and with performative functions the sideBand network follows a Luddite logic of unpicking and the creation of space for refusal. The project includes the active dismissal of certain processes and tools, to open the exploration of other strategies and practices, better suited to commonality or the process of imagining such a thing. It remains open to other channels of data, shadow libraries, local wikis or static sites. It both requires and reaffirms a local presence and intimacy with the infrastructure.

The material conditions of networking infrastructure continue to be a focus within research and arts practices, shaping the development of concepts such as permacomputing, and a wider revival of Luddism. As was clear to the participants of "Are You Being Served?", an event dedicated to a Feminist review of mesh, cloud, autonomous, and DIY servers:

The necessary infrastructure that is put in place effects our understanding of place, both virtually and physically and it has become increasingly difficult to be intimate with the technologies that we feel familiar with. (Laforet et al 4)

When so much of life is contingent on the so called "connectivity" of the internet and subject to ever greater modes of extraction, a template of autonomous zones and fragmented protocols appears as a necessary practice. The methodologies of feminist servers shared by ServPub and sideBand clearly provide an effective route to gaining this proximity to infrastructure and the tools to shape other forms.

Figure 8: Wemos D1 mini underside.
Figure 9: Wemos D1 mini topside.

Notes

  1. ServPub is inclusive of hardware, software and the broad constellation of people engaged in this careful practice. It is a collective of collectives, comprised of SysterServer, In-grid, Creative Crowd/Varia, Centre for the Study of the Networked Image at London South Bank University, UCL Slade Schoool of Art, and SHAPE at Aarhus University.
  2. Another protocol, Secure Shell (SSH) allows commands to be sent securely between computers across an unsecured network.
  3. The process of deciphering programming from photography of chips is commonly part of reverse-engineering efforts aimed at proprietary and/or old hardware (Ilmer; Shirriff).
  4. This is documented in the publication by Constant, “Are You Being Served?” (2014), which followed a 2013 event "dedicated to a Feminist review of mesh, cloud, autonomous, and DIY servers".

Works cited

Andersen, Christian Ulrik, and Søren Bro Pold. The Metainterface. The MIT Press, 2018.

Anti-Colonial Tech Panel. 2024, https://pad.riseup.net/p/r.5e8fb6bd54cdce773db487845244e55d.

Arboleda, Martin. Planetary Mine. Verso, 2020.

Barragán, Hernando. Wiring. 2003.

BBC. "Von Neumann Architecture." BBC Bitesize. https://www.bbc.co.uk/bitesize/guides/zhppfcw/revision/3. Accessed 12 Mar. 2024.

Benchoff, Brian. "New Chip Alert: The ESP8266 WiFi Module (It’s $5)." Hackaday, 26 Aug. 2014. https://hackaday.com/2014/08/26/new-chip-alert-the-esp8266-wifi-module-its-5/

Berends, Manetta, and Simon Browne. Wiki4print. 2023. https://wiki4print.servpub.net/index.php?title=Wiki4print.

Bez, Roberto, Emilio Camerlenghi, Alberto Modelli, Angelo Visconti. "Introduction to Flash Memory". Proceedings of the IEEE, vol. 91, no. 4, Apr. 2003, pp. 489–502. IEEE Xplore, https://doi.org/10.1109/JPROC.2003.811702

Binfield, Keven. Writings of the Luddites. John Hopkins University Press, 2004.

Buzova, Yoana, Lasse van den Bosch Christensen, André Castro, Lucia Dossin, Max Dovey, Michaela Lakova, Martino Morandi, Ana Luísa Moura, Lídia Pereira, Roel Roscam Abbing, et al. Bibliotecha. 2013. https://web.archive.org/web/20210927121215/https://bibliotecha.info/

Content/Form, A Peer-Reviewed Newspaper, vol. 13, no. 1, 2024 2024, https://darc.au.dk/fileadmin/DARC/newspapers/Content-Form%20A-Peer-Reviewed-Newspaper-Volume-13-Issue-1-2024.pdf

Counter Cloud Action Day. 2024, https://www.tecnosandias.org/8m.

Courbon, Franck, Sergei Skorobogatov and Christopher Woods. "Reverse Engineering Flash EEPROM Memories Using Scanning Electron Microscopy". Springer, 2017. https://doi.org/10.17863/CAM.7164.

Darts, David, and Matthias Strubel. PirateBox. 2011, https://piratebox.cc/start.

Dey, Ritwick. Live Server. 2017, https://ritwickdey.github.io/vscode-live-server/.

Dunbar-Hester, Christina. Low Power To The People. The MIT Press, 2014.

Espressif. ESP8266ex Datasheet. Espressif, 2023, https://www.espressif.com/sites/default/files/documentation/0a-esp8266ex_datasheet_en.pdf.

Etherpad. Etherpad Foundation, 2008, https://etherpad.org.

Feranec, Robert. "How to Make Custom ESP32 Board in 3 Hours". 2023, https://www.youtube.com/watch?v=S_p0YV-JlfU.

Flynn, Elizabeth Gurley. Sabotage, The Conscious Withdrawal of the Workers’ Industrial Efficiency. IWW Publishing Bureau, 1916, https://archive.iww.org/history/library/Flynn/Sabotage/.

Fouilloux, Anne. 'Introduction to the Internet of Things (IoT): ESP8266 Architecture and Arduino GUI". https://annefou.github.io/IoT_introduction/02-ESP8266/index.html.

Galloway, Alexander R. Protocol. The MIT Press, 2004.

Goyal, Kovid. Calibre. 2006, https://calibre-ebook.com.

Griffey, Jason. LibraryBox. 2012, https://jasongriffey.net/librarybox/.

Halfacree, Gareth. "Raspberry Pi Confirms a Planned IPO, But Says Hobbyists Will Remain 'Incredibly Important'". Hackster.Io, https://www.hackster.io/news/raspberry-pi-confirms-a-planned-ipo-but-says-hobbyists-will-remain-incredibly-important-f7b9625e0d52. Accessed 8 Mar. 2024.

Hay, Douglas, Peter Linebaugh and E. P. Thompson. Albion’s Fatal Tree. Allen Lane, 1975.

Iffy Books Pocket Wifi Portal Zine. Iffy Books, 2022, https://iffybooks.net/pocket-wifi-portal.

Illich, Ivan. Tools for Conviviality. Harper & Row, 1973.

Ilmer, Veniamin. Veniamin-Ilmer/Decoding_rom. 2024. 6 Mar. 2024. GitHub, https://github.com/veniamin-ilmer/decoding_rom.

Karagianni, Mara. "Systerserver Diagram." Content/Form, A Peer-Reviewed Newspaper, vol. 13, no. 1, 2024, p. 1.

Laforet, Anne, Marloes de Valk, Madeleine Aktypi, An Mertens, Femke Snelting, Michaela Lakova, Reni Höfmuller (Eds.). "Are you being served? (notebooks)". Constant, 2014, https://calibre.constantvzw.org/book/17.

LoRa. Cycleo. Semtech, 2014.

Ludd, Ned. "Luddite Bicentenary: 12th January 1813: Baron Thomson Passes Sentence on the Convicted Luddites". Luddite Bicentenary, 12 Jan. 2013, https://ludditebicentenary.blogspot.com/2013/01/12th-january-1813-baron-thomson-passes.html.

Kara, Siddharth. Cobalt Red. St Martins Press, 2022.

Malm, Andreas. How to Blow Up a Pipeline. Verso, 2021.

Manske, Magnus, and Lee Daniel Crocker. MediaWiki. Wikimedia Foundation, 2002.

Marriott, Nicholas. Tmux. 2007.

Pattnayak, Tapan, and Guhapriyan Thanikachalam. "Antenna Design and RF Layout Guidelines". Infineon Technologies, 2018, http://www.cypress.com/go/AN91445

Reas, Casey, and Ben Fry. Processing. 2001.

Rinke, Andreas, and Christoph Steitz. "Around 200,000 Gather across Germany in Latest Protests against Far-Right". Reuters, 3 Feb. 2024. https://www.reuters.com/world/europe/least-120000-gather-berlin-latest-round-protests-against-far-right-2024-02-03/.

Shaw, Aurynn. "A Case Study on Raspberry Pi's Incident on the Fediverse." Eiara Limited - Sustainable DevOps, https://eiara.nz/posts/2022/Dec/09/a-case-study-on-raspberry-pis-incident-on-the-fediverse/. Accessed 27 Mar. 2024.

Sheldon, Robert, and Carol Silwa. "NAND Flash Wear-Out." Tech Target, https://www.techtarget.com/searchstorage/definition/NAND-flash-wear-out. Accessed 12 Mar. 2024.

Shiffman, Daniel. "Interview with Casey Reas and Ben Fry." Rhizome, 23 Sept. 2009, https://rhizome.org/editorial/2009/sep/23/interview-with-casey-reas-and-ben-fry/.

Starosielski, Nicole. The Undersea Network. Duke University Press, 2015.

Sumi, Denise Helene. "Pirate Care and Usable Politics and Pedagogies". Content/Form, A Peer-Reviewed Newspaper, vol. 13, no. 1, 2024, p. 13. https://darc.au.dk/fileadmin/DARC/newspapers/Content-Form_A-Peer-Reviewed-Newspaper-Volume-13-Issue-1-2024.pdf

Thacker, Eugene. "Introduction to Protocol." In Alexander R. Galloway, Protocol. The MIT Press, 2004.

Thompson, E. P. The Making of the English Working Class. Pantheon Books, 1980.

Upton, Eben. "Production and Supply-Chain Update". Raspberry Pi, 4 Apr. 2022, https://www.raspberrypi.com/news/production-and-supply-chain-update/.

"What Is Arduino?", Arduino, 2018, https://www.arduino.cc/en/Guide/Introduction.

Biography

Mateus Domingos is a PhD student at the Centre for the Study of the Networked Image, London South Bank University. His research explores the history of the Luddites and the computational technologies used within activist arts practices today. He is based in Leicester, where he works as an art technician and is a member of artist-run Two Queens. ORCID id: 0009-0001-1268-189X