Toward a Minor Tech



The Democratization of Machine Learning

Inga Luchs

By discriminating information from masses of data, machine learning algorithms are applied in the ranking of search engine results, the filtering of spam e-mails, and the recommendation of content, but also in the detection of credit card fraud and in crime prediction. These systems are built on the belief that “the world is ‘knowable’ and computationally simulatable, and that computers will be able to process the messiness of the real world […]” (Ito 2017). To this end, the training of ML systems processes large amounts of collected data through operations described as smooth and efficient, each iteration in the learning process being an attempt at optimisation towards the ‘best possible fit’. However, these systems cannot deliver on this promise: since they cannot be separated from the cultural sphere in which they operate, they not only mirror biases already existing in society, but deepen them further, resulting in the discrimination of people along lines of race, class and gender (Apprich et al. 2019).
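To make this iterative logic concrete, consider a minimal sketch (not from the original text) of such an optimisation loop, assuming an ordinary least-squares linear model fitted by gradient descent; the dataset and all variable names are purely illustrative:

```python
import numpy as np

# Illustrative only: a toy dataset standing in for 'masses of collected data'.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])                # the pattern hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=100)   # noisy observed targets

w = np.zeros(3)   # model parameters, initialised arbitrarily
lr = 0.1          # learning rate: the size of each optimisation step

# Each iteration nudges the parameters toward the 'best possible fit',
# here by minimising the mean squared error via gradient descent.
for step in range(500):
    residual = X @ w - y                 # how far the model currently misses
    grad = 2 * X.T @ residual / len(y)   # gradient of the mean squared error
    w -= lr * grad                       # one small step toward a better fit

print(w)  # close to true_w -- but only because this toy data is clean
```

The loop converges so neatly here only because the toy data is well-behaved; run on real-world data, the very same loop faithfully optimises toward whatever biases that data carries.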

As a seeming countermeasure, US-based big tech companies are striving for the ‘democratization’ of AI, promising “universal, all-inclusive accessibility, participation, and transparency” (Sudmann 2019). This entails a growing simplification and automation of ML interfaces and platforms, the open-source provision of infrastructures, and the offer of free educational resources. This ‘democratization’ of AI can, however, only be understood as a set of measures to ensure that company-owned products remain the main means for the development of ML and to advance these companies’ infrastructural power (Dyer-Witheford et al. 2019). As a result, the research, development and learning of ML are heavily informed by a capitalist logic, which significantly shapes the problematic impact of ML operations.
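The degree of simplification at stake can be illustrated with scikit-learn, chosen here only as a familiar open-source example (the hosted AutoML platforms of big tech compress the process even further); the dataset is a bundled toy set, and the point is how much disappears into defaults:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)        # data arrives ready-made

clf = LogisticRegression(max_iter=1000)  # dozens of modelling decisions hidden in defaults
clf.fit(X, y)                            # the entire training process, one call
print(clf.predict(X[:5]))                # deployment reduced to another call
```

Three calls stand in for data collection, feature design, optimisation, and evaluation; every decision the user does not make is still being made, only elsewhere.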

Rather than providing access for everyone, what we need to strive for is a radically different approach to the creation of ML systems, one that breaks with the values of scale, optimisation and efficiency that have been nourished for decades. One way might be to follow Anna Tsing’s ‘nonscalability theory’ as an “alternative for conceptualizing the world” which “pays attention to the mounting pile of ruins that scalability leaves behind” (Tsing 2012). For machine learning, this could mean acknowledging the limits it runs up against: the messiness of reality and the impossibility of lossless translation, but also the messiness of the ML process itself, which has to deal with dirty data and with the political notion of discrimination.

But critical practices are also needed in practically engaging with the technology – in learning to do machine learning and in interacting with its platforms, libraries and datasets. We should oppose big tech’s tendency to hide ML operations away behind the obfuscating interfaces we use, and look behind them in order to gain a deeper understanding of the technical operations and to acknowledge their embeddedness in our world. Once we fully understand this condition, we sooner or later need to ask: is machine learning the best possible way to filter and classify data – or should we rather seek other technological means that are not intrinsically built on notions of scalability?
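One modest form such looking behind the interface can take, sketched here by continuing the scikit-learn example above: reproducing the classifier’s prediction by hand from its learned parameters. This makes visible that behind predict() there is nothing more mysterious than a weighted sum of features followed by picking the highest score:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# What the interface hides: each 'prediction' is a weighted sum of the
# input features plus a bias term, yielding one score per class ...
scores = X @ clf.coef_.T + clf.intercept_

# ... and the predicted class is simply the one with the highest score.
manual = scores.argmax(axis=1)

print((manual == clf.predict(X)).all())  # True: predict() reproduced by hand
```

Such an exercise does not by itself demystify the politics of the data, but it strips the interface of its aura – a small step toward the deeper understanding argued for above.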