Toward a Minor Tech:Luchs

The Democratization of Artificial Intelligence and Machine Learning

Machine learning has become a central area of artificial intelligence in recent decades. From search engine queries and the filtering of spam e-mails to the recommendation of books and movies, the detection of credit card fraud and predictive policing, applications based on machine learning algorithms are taking over the classification tasks of our everyday life. These algorithmic operations, however, cannot be separated from the cultural sphere in which they emerge. Consequently, algorithms not only mirror biases that already exist in society but deepen them further, consolidating race, class, and gender as immutable categories (Apprich et al. 2019).

In our current situation, big tech heavily shapes AI research and development. In the US context, companies such as Google and Microsoft hold a tremendous position of power due to their control over cloud computing, large data sets and AI talent (Dyer-Witheford et al. 2019). Moreover, they increasingly invest in offering AI- and ML-as-a-service, providing ready-to-use AI technologies and renting out their vast infrastructures for the training and development of ML models (Srnicek 2019). With respect to issues of algorithmic discrimination (O’Neil 2016; Eubanks 2017), the dominance of big tech in the development of ML is crucial, because who develops AI systems significantly shapes how AI is imagined and built – and these spaces “tend to be extremely white, affluent, technically oriented, and male” (West et al. 2019, 6).
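
What ML-as-a-service looks like in practice can be made concrete with a minimal sketch (an illustration added here, not drawn from the sources): a single call to Google’s Cloud Vision API, in which the model, its training data and the compute all remain on the company’s infrastructure; the file name is a placeholder.

```python
# A minimal sketch of ML-as-a-service, assuming the google-cloud-vision
# package is installed and API credentials are configured; "photo.jpg"
# is a placeholder. The model, its training data and the hardware stay
# on Google's side - the user only sends bytes and receives labels.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# One remote call replaces the entire classification pipeline.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)
```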

To counter this problem, many critical media researchers plead for a participatory approach that includes more diverse communities in the creation of AI systems (Costanza-Chock 2018; Benjamin 2019; D’Ignazio and Klein 2020). A suggested solution to big tech’s dominance within the AI industry is thus often a call to make AI more accessible and to place the technologies in the hands of the public (Verdegem 2022). However, it is exactly this discourse that big tech has appropriated: they, too, aim to ‘democratize’ AI with the “promise of universal, all-inclusive accessibility, participation, and transparency” (Sudmann 2019, 23). This entails a growing simplification and automation of AI, the open-source provision of corporate ML infrastructures, and the offer of free educational resources, for instance in the form of introductory ML courses and training certificates. This ‘democratization’ of AI can, however, only be understood as a set of measures to ensure that company-owned products remain the main means for the development of ML. Or, as Dyer-Witheford et al. put it: “If AI becomes generally available, it will still remain under the control of these capitalist providers” (Dyer-Witheford et al. 2019, 56). At this point, even companies that decidedly oppose the aspirations of big tech are deeply intertwined with its ML infrastructures. The US-American AI company Hugging Face, for instance, sees itself “on a journey to advance and democratize artificial intelligence through open source and open science”, positioning itself against big tech, which has not had “a track record of doing the right thing for the community”. On closer inspection, however, it becomes evident that Hugging Face, too, cannot circumvent corporate software libraries and models.
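
This dependency is visible even at the level of a few lines of code. The sketch below (model and task chosen here for illustration) loads a classifier through Hugging Face’s own open-source transformers library – which in turn must run on a corporate deep learning framework, PyTorch (originated at Facebook/Meta) or TensorFlow (Google), since transformers ships no computational backend of its own.

```python
# A minimal sketch, assuming the transformers library plus one of its
# backends (PyTorch or TensorFlow) are installed. Even this 'open'
# pipeline executes on a corporate deep learning framework.
from transformers import pipeline

# Loading the model silently pulls in torch (Meta) or tensorflow
# (Google); there is no framework-independent execution path.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Democratizing AI through open source."))
```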

How, then, can we approach the practice of ML development from a ‘minor’ tech perspective? While access and community-based development are certainly important, ML algorithms equally need to be considered beyond their instrumental dimension. It is thus not enough to simply hand over the technology to the community – we need to think about how to conceptualize a radically different approach to the creation of ML systems, one that breaks with values that have been nurtured for decades and that are deeply intertwined with ML research, development and education. One of the key issues at play here is that current ML systems are built on the notion of scalability and grounded in the belief that “the world is ‘knowable’ and computationally simulatable, and that computers will be able to process the messiness of the real world just like they have every other problem that everyone said couldn’t be solved by computers” (Ito 2018, 4). Collected data is thus equated with truth – and, as the narrative goes, the more data, the better. In this way, the quantity of captured data keeps rising, and with it the computational resources needed for machine learning grow extensively. The notion of scalability is also reflected in the advancement of big tech’s infrastructural power described above: ML infrastructures are posed as “uniform blocks, ready for further expansion” (Tsing 2012, 505), making it nearly impossible to circumvent big tech’s products when creating ML systems.
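
How tightly data quantity and compute are coupled can be illustrated with a back-of-the-envelope calculation (added here as an illustration, not drawn from the sources): a commonly used rule of thumb estimates the total compute for training a transformer-style model with N parameters on D tokens at roughly 6 × N × D floating-point operations, so growing data sets and models together multiplies, rather than adds to, the required resources.

```python
# A back-of-the-envelope sketch, assuming the common rule of thumb
# C = 6 * N * D for the total floating-point operations needed to
# train a transformer-style model with N parameters on D tokens.
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute in FLOPs."""
    return 6 * n_params * n_tokens

# Scaling model size and data set together drives compute up
# multiplicatively - the infrastructural core of 'the more, the better'.
for n_params, n_tokens in [(1e8, 1e9), (1e9, 1e10), (1e10, 1e11)]:
    print(f"{n_params:.0e} parameters, {n_tokens:.0e} tokens "
          f"-> ~{training_flops(n_params, n_tokens):.1e} FLOPs")
```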

The way we conceptualize the world shapes how we make our world, as Anna Tsing (2012, 506) explains. Thus, if the notion of scalability lies at the core of the technologies we use to discriminate information from data, and if these scalable systems are treated as neutral means not only to constitute our present but to predict our future, then the results will differ greatly from those we obtain when we instead seek a ‘nonscalability theory’ as an “alternative for conceptualizing the world” (Tsing 2012, 507) that “pays attention to the mounting pile of ruins that scalability leaves behind” (506). This could mean, for instance, acknowledging the limitations of ML – the messiness of reality and the impossibility of lossless translation, but also the messiness of the ML process itself, with its dirty data and the political dimension of discrimination (Apprich et al. 2019). More fundamentally, however, we must interrogate the technological foundations of machine learning and ask whether it is technically even possible to escape a large-scale logic – and, if not, what other ways there are for us to discriminate information from the abundance of data, both technically and conceptually.

References

Apprich C, Chun WHK, Cramer F and Steyerl H (eds) (2019) Pattern Discrimination. Lüneburg/Minneapolis: meson press/University of Minnesota Press.

Benjamin R (2019) Race After Technology. Abolitionist Tools for the New Jim Code. Cambridge, UK/Medford, MA: Polity Press.

Costanza-Chock S (2018) Design justice: Towards an intersectional feminist framework for design theory and practice. Proceedings of the Design Research Society.

D’Ignazio C and Klein LF (2020) Data Feminism. Cambridge, MA: MIT Press.

Dyer-Witheford N, Kjosen AM and Steinhoff J (2019) Inhuman Power. Artificial Intelligence and the Future of Capitalism. London, UK: Pluto Press.

Eubanks V (2017) Automating Inequality. New York: St. Martin’s Press.

Ito J (2018) Resisting Reduction: A Manifesto. Journal of Design & Science, December 2.

O'Neil C (2016) Weapons of Math Destruction. New York: Penguin Random House.

Srnicek N (2019) The Political Economy of Artificial Intelligence. Recorded talk, Great Transformation, Jena, 23.09.–27.09. Available at: https://www.youtube.com/watch?v=Fmi3fq3Q3Bo.

Sudmann A (2019) The Democratization of Artificial Intelligence. Net Politics in the Era of Learning Algorithms. Bielefeld: transcript.

Tsing A (2012) On Nonscalability: The Living World is not Amenable to Precision-Nested Scales. Common Knowledge 18(3): 505–524.

Verdegem P (2022) Dismantling AI capitalism: the commons as an alternative to the power concentration of Big Tech. AI & Society, April 9.

West SM, Whittaker M and Crawford K (2019) Discriminating Systems. Gender, Race, and Power in AI. AI Now Institute.