Pierre Depaz

Shaping Vectors

Discipline and Control in Word Embeddings

Abstract

This article investigates how the word embeddings at the heart of large language models are shaped into acceptable meanings. We show how such shaping follows two educational logics. The use of benchmarks to discover the capabilities of large language models exhibits features similar to Foucault’s disciplining school enclosures, while the process of reinforcement learning is framed as a modulation made explicit in Deleuze’s control societies. This shaping into acceptable meaning is argued to result in semantic subspaces. These semantic subspaces are presented as the restricted lexical possibilities of human-machine dialogic interaction, and their consequences are discussed.

Introduction

When following the direction from man towards programmer in a space composed of word vectors, computational linguists Bolukbasi et al. encountered a problem: the resulting value when starting from woman was homemaker (Bolukbasi et al., 2016). In order to correct this mistake (programmer should stand to woman as it does to man), they developed algorithms to “de-bias” word embeddings—the vector representation of text—and thus provide a different configuration of words that would be considered less sexist.
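The analogy probe that produced this result can be made concrete in a few lines of vector arithmetic. The sketch below is a minimal, hypothetical reconstruction in Python, assuming a pre-trained word2vec-format embedding file loaded through the gensim library; the file name is illustrative, not the model Bolukbasi et al. actually used.

    # Probing an analogy over pre-trained word embeddings (illustrative).
    from gensim.models import KeyedVectors

    # Hypothetical path; any word2vec-format embedding file would do.
    vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

    # "man is to programmer as woman is to ...?"
    # most_similar ranks words by closeness to: programmer - man + woman.
    result = vectors.most_similar(positive=["programmer", "woman"],
                                  negative=["man"], topn=1)
    print(result)  # embeddings of that era returned 'homemaker' here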

Word embeddings are ways to organize words in space such that their proximity or distance to other words holds semantic information. However, an unwanted proximity or distance might be interpreted as bias by researchers and users alike (Noble, 2018; Bender et al., 2021; Steyerl, 2023), and can be understood as a sense-making problem, in which a given semantic output does not correspond to expectations. And yet, as Bolukbasi and their colleagues show, it is possible to reconfigure semantic fields such that they make more acceptable sense. This article investigates how word embeddings, as used in large language models (LLMs), are the result of shaping processes, and how these shaping processes are akin to educational processes.
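Bolukbasi et al.’s reconfiguration rests on a geometric operation: identify a bias direction in the embedding space, then remove each word vector’s component along it. The numpy sketch below shows only that core projection step, with the bias direction estimated from a single word pair for illustration, whereas the authors aggregate many definitional pairs.

    import numpy as np

    def debias(word_vec, bias_direction):
        """Remove the component of word_vec along bias_direction:
        the core projection step of 'hard' de-biasing."""
        b = bias_direction / np.linalg.norm(bias_direction)
        return word_vec - np.dot(word_vec, b) * b

    # Illustrative only: a gender direction from one definitional pair.
    # bias_direction = vectors["he"] - vectors["she"]
    # vectors["programmer"] = debias(vectors["programmer"], bias_direction)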

We define shaping processes as the different steps in the development of a technical artefact that modify both its function and how users perceive it. This article focuses on two such processes, benchmarking and reinforcement learning, to highlight the overall tendency in which shaping processes inscribe themselves. As such, the central questions we address are: under which logic do shaping processes take place? How do technical processes implement such logics in order to discover meaning-making capabilities in LLMs? And who determines the kind of sense that is being made by a large language model? We hypothesize that these processes can be productively analyzed through the dual lens of discipline and control, as put forth, respectively, by Michel Foucault (Foucault, 1993) and Gilles Deleuze (Deleuze, 1992), particularly in their discussions of education; through this, we show that shaping logics, when it comes to generative cognitive technologies, influence the development and assessment of meaning-making abilities in both the machine and the human.

We begin by exploring how meaning can be encoded digitally by making the relationship between syntax and semantics in computer environments explicit. By comparing binary encoding and vector encoding, we highlight the complexities of the latter, particularly when assessing meaningfulness. We then trace how those vectors are being shaped — that is, rendered operationally meaningful — within LLMs. Specifically, we pay attention to two particular steps in the creation process of an LLM: benchmarking and reinforcement learning. We highlight how these techniques, a combination of discipline and control, contribute not only to the normalization and standardization of meaning but also to its modulation and adaptation, and result in semantic subspaces.
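The contrast between the two encodings can be illustrated compactly: a character’s byte values fix its identity but say nothing about its relation to other characters, while an embedding’s coordinates only become meaningful through their geometry relative to other vectors. In this toy Python example, the three-dimensional vectors are invented for illustration.

    import numpy as np

    # Binary encoding: 'cat' is a fixed byte sequence; these numbers
    # encode identity, not meaning.
    print(list("cat".encode("utf-8")))  # [99, 97, 116]

    # Vector encoding: toy embeddings whose geometry, not individual
    # values, carries the semantics.
    cat = np.array([0.9, 0.8, 0.1])
    dog = np.array([0.8, 0.9, 0.2])
    car = np.array([0.1, 0.2, 0.9])

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(cat, dog))  # high: 'cat' and 'dog' sit close together
    print(cosine(cat, car))  # lower: 'car' occupies a different region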

Discussing Alan Turing’s proposal of machine intelligence as an educational problem, we conclude by turning to theories of co-construction of intelligence (Bachimont, 2004; Stiegler, 2010) to sketch out, through examples of linguistic normalization, hallucinations, and prompting, how such word embeddings can operate logics of control themselves.







Finally, we sketched[1] out how such a combination of discipline and control in shaping word embeddings can affect users. Through dialogic interaction, the user probes the spatial configurations of meaning, but the exact topology of these configurations nonetheless remains elusive, and can thus impact what can be said and what can be imagined, adding to the existing challenges of linguistic expression in the era of computation.[2]

[Figure: Caption for example PNG image]

Notes

This article has benefited greatly from thorough discussions with, and copy edits by, Sara Messelaar Hammerschmidt.

  1. I have corrected the spelling here
  2. And this is the end of the article.

Works cited

Biography

Pierre Depaz is currently a Lecturer of Interactive Media at NYU Berlin. His research focuses on understanding how software operates procedural translations of non-computational entities, and how these translations affect humans’ perceptions of, and affordances within, the world. ORCID: https://orcid.org/0009-0009-1489-247X