
In whose hands is your attention? The story of the algorithm of the oppressed

How do algorithms work today, what did we expect of them, and what did we get in reality? Although there are numerous examples that have long been treated as a scientific and ethical problem, regulation is slow to catch up with technology and its development.


After reading the sociological study Exiled Concepts by Todor Kuljić, I began to think intensively about the concepts that define contemporary society, and especially about those that explore its inequalities. Or, to paraphrase Kuljić: how do we recognize the social conditions of a new conceptualization of historical and contemporary experience? As an indicator for this analysis we can take the series of concepts that appear, year after year, as the words that marked a given period: brain rot (2024), generative artificial intelligence (2023), AI (2023), doomscrolling (2020), cancel culture (2019), sharing economy (2015), #blacklivesmatter (2014)… This analysis seems to lead to the conclusion that the key concepts of contemporary society are necessarily tied to digital culture, that is, that the trend of “conceptualizing contemporary experience” is directly related to the phenomena of media culture. If we reduce this lexical analysis to a single concept, the common denominator that unites all the aforementioned practices and phenomena would be: His Majesty the Algorithm.

Algorithm time

Although it (probably) originates in the 9th century and essentially belongs to a field delimited by mathematics as a formal and exact science, the concept of the algorithm has over time migrated into the social sciences and humanities, touching in particular on the field of digital humanities. Precisely because of this, a set of precisely defined steps for solving a problem (which is one definition of an algorithm) has opened up a series of social problems that, viewed in the context of media literacy, Svetlana Jovetić Koprivica calls the torture of the algorithm. And perhaps it is precisely in the process of media literacy that the fusion of rigorous STEM disciplines and critical reflection in the social sciences lies. From this fusion arise, among other things, ethical dilemmas and what we somewhat clumsily translate as algorithmic bias.

How do algorithms choose for us?

However – how does the algorithm work today, what did we expect of it, and what did we get in reality? Algorithmic personalization, or personalized algorithmic selection, has marked the last two decades, and especially the era of social media. Ever since Google introduced personalized search in 2004 (and, since 2009, made it the standard for all users, even those logged out of their accounts, where deleting cookies does not help much), a number of media platforms (Facebook, Instagram, TikTok…) have entered the race to offer a “better” algorithm. The quality of an algorithm was, in essence, assessed by two criteria: how long it captures the user’s attention (the attention economy) and how much money it brings to platforms and advertisers. Such criteria, in short, dehumanize the process of personalized content selection and testify to the dominance of lucrative patterns of content production whose main goal is the exploitation of users. Yet algorithmic personalization, as a complex mechanism for processing big data on user behavior on the Internet, emerged as a response to the accelerated accumulation of media material: some estimates suggest that over 400 million terabytes of data are generated every day, which justifiably raises the question of what to do with so much media content. The idea that platforms themselves, on our behalf and on the basis of our data, select content that is useful to us or meets our needs may once have been interpreted as a strategy that gives us access to the content most relevant to us, i.e. that we as users have absolute control over personalization. In practice, predictive analytics has turned out to work exclusively for large corporations that control the content selection process, as well as the data that we, willingly or unwillingly, make available to them. This is evidenced by the hypothesis that algorithmic filtering and the distribution of information played a key role in the United Kingdom’s referendum (Brexit), shaping the way the public perceived the key issues of migration, sovereignty, the economy and EU bureaucracy.
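
To make that logic concrete, here is a minimal, purely illustrative sketch (in Python) of how an engagement-driven feed might rank content by exactly those two criteria. The field names and weights are assumptions for illustration, not any platform's actual code.

```python
# Illustrative sketch only, not any platform's real ranking code.
# It scores each post by the two criteria described above: predicted
# attention captured and expected advertising revenue, then orders the
# feed so that the most "engaging" content comes first.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_watch_seconds: float  # hypothetical model output
    predicted_ad_revenue: float     # hypothetical model output

def engagement_score(post: Post,
                     attention_weight: float = 0.7,
                     revenue_weight: float = 0.3) -> float:
    # Relevance or usefulness to the user never enters the formula.
    return (attention_weight * post.predicted_watch_seconds
            + revenue_weight * post.predicted_ad_revenue)

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

# Example: a long, monetizable clip outranks a short, relevant one.
feed = rank_feed([
    Post("useful_news", predicted_watch_seconds=20, predicted_ad_revenue=0.1),
    Post("outrage_clip", predicted_watch_seconds=90, predicted_ad_revenue=0.5),
])
print([p.post_id for p in feed])  # ['outrage_clip', 'useful_news']
```

The point of the sketch is what is missing from it: nowhere does the user's actual need or benefit appear in the score.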

When algorithms become unfair

The dystopian image of the algorithm is not limited to personalized selection and the manipulation of our data; it also includes systemic, recurring “errors” that are not (exclusively) controlled by humans or by large centers of power. These errors are, in fact, unfair and discriminatory outcomes that an automated system tends to produce, favoring one group of data over another. Such discrimination is not (necessarily) in the service of capital accumulation, but it is evidence that the algorithm is not neutral, that is, that bias is built in through the design of the algorithm, the data, or the way the data is applied and processed.

Google introduced an application for recognizing the content of images in 2015 and opened up the field of digital stereotypes: the algorithm recognized and labeled dark-skinned people as gorillas, for which Google apologized. Years later, neither Google nor other global corporations such as Apple allow gorilla recognition, because they have blocked that tag in their algorithms, thereby merely bypassing the problem and offering apparent neutrality instead of a solution. A similar bias was shown by the COMPAS algorithm used to assess the risk of recidivism in some US states: research analyses showed that the algorithm more often gave falsely high risk scores to African-American defendants (compared to white defendants), meaning that they were twice as likely to be wrongly labeled as future offenders, while white defendants were more often (incorrectly) assessed as low-risk.
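
How is such a disparity established? Below is a minimal sketch, with entirely made-up data, of the kind of check behind such findings: the false positive rate (people wrongly labeled high-risk) is computed separately for each group and then compared.

```python
# Illustrative sketch with made-up data, not the actual COMPAS dataset:
# compares the false positive rate (wrongly labeled high-risk) across groups,
# the kind of check used in analyses of recidivism risk scores.

def false_positive_rate(records: list[dict]) -> float:
    # False positive: labeled high-risk although the person did not reoffend.
    did_not_reoffend = [r for r in records if not r["reoffended"]]
    if not did_not_reoffend:
        return 0.0
    wrongly_flagged = [r for r in did_not_reoffend if r["labeled_high_risk"]]
    return len(wrongly_flagged) / len(did_not_reoffend)

# Hypothetical records for two groups, A and B.
defendants = [
    {"group": "A", "labeled_high_risk": True,  "reoffended": False},
    {"group": "A", "labeled_high_risk": True,  "reoffended": False},
    {"group": "A", "labeled_high_risk": False, "reoffended": False},
    {"group": "A", "labeled_high_risk": False, "reoffended": True},
    {"group": "B", "labeled_high_risk": True,  "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": False},
    {"group": "B", "labeled_high_risk": False, "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in defendants if r["group"] == group]
    print(group, false_positive_rate(subset))
    # Group A is wrongly flagged twice as often as group B (0.67 vs 0.33).
```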

Amazon was developing an AI tool that would review resumes and rank candidates, but it was quickly discovered that the algorithm discriminated against women. The data on which the algorithm was “trained” was historically conditioned: the male dominance of the tech sector meant that resumes of female candidates received “penalty” points. Amazon suspended the project because of the risk that similar mistakes would be repeated. This type of sex and gender discrimination was also present in the code for targeted advertising (Facebook) and credit scoring (Apple Card), which likewise favored one gender, although this was not the intention.
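
The mechanism is easy to reproduce in miniature. The following is an illustrative sketch (not Amazon's actual system) of how a naive scorer “trained” on historical hiring decisions learns to penalize any word, such as “women's”, that was rare among predominantly male past hires.

```python
# Illustrative sketch, not Amazon's actual system: a naive resume scorer
# "trained" on historical hiring decisions learns to penalize any word that
# was rare among (predominantly male) past hires, such as "women's".

from collections import Counter

def train_word_scores(history: list[tuple[str, bool]]) -> dict[str, float]:
    # Each item is (resume text, was_hired); a word's score is the hire rate
    # of past resumes containing it.
    seen = Counter()
    seen_in_hired = Counter()
    for text, hired in history:
        for word in set(text.lower().split()):
            seen[word] += 1
            if hired:
                seen_in_hired[word] += 1
    return {word: seen_in_hired[word] / seen[word] for word in seen}

def score_resume(text: str, word_scores: dict[str, float]) -> float:
    words = [w for w in text.lower().split() if w in word_scores]
    return sum(word_scores[w] for w in words) / len(words) if words else 0.0

# Hypothetical history dominated by male hires.
history = [
    ("captain of chess club", True),
    ("captain of chess club", True),
    ("captain of women's chess club", False),
]
scores = train_word_scores(history)
print(score_resume("captain of chess club", scores))          # higher score
print(score_resume("captain of women's chess club", scores))  # penalized via "women's"
```

The scorer has no notion of gender at all; it simply inherits the pattern hidden in the historical data, which is exactly the point.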

Although there are numerous examples that have long been treated in academic circles as a scientific and ethical problem, regulation has been slow to catch up with technology and its development. Several years ago the US administration published a document entitled Blueprint for an AI Bill of Rights, a set of non-binding principles concerning artificial intelligence, some of which specifically address the prohibition of algorithmic discrimination. European legislation has attempted to address some of these issues through the General Data Protection Regulation (GDPR), the Digital Services Act (DSA) and the Artificial Intelligence Act (AI Act), but it remains an open question how EU member states (and those that declaratively want to become members) will implement all this in everyday practice. Montenegrin legislation passively follows media standards that apply to traditional media, so it is not to be expected that artificial intelligence and algorithmic bias will appear on the agenda of the competent institutions, primarily the Government and Parliament of Montenegro.

What is obvious from these examples is that the development of technology, although it originates in the exact sciences, is not neutral: it strongly reflects and reproduces social conditions and circumstances. Algorithmic bias is, therefore, only digitalized discrimination that has existed in society for decades and centuries, on all grounds – sex, gender, class, sexuality, race, and so on. Thus digitalized, discrimination is no longer just a temporal category; it has also become spatial, expanding into all media as extensions of our senses, as Marshall McLuhan would say. As such, however, it is more dangerous: precisely because of the assumption that computer decisions are objective and exact, they receive trust they do not deserve, and their bias is harder to detect. Exposing this apparent neutrality is a task for all of us, daily and everywhere – and not only so that AI systems become fairer, more transparent and safer for society. Perhaps then, at least in a digital society, discrimination will become one of the – exiled concepts.

The author is a producer and university professor.

This text was written for the website medijskapismenost.me as part of the program of the Agency for Audiovisual Media Services.


(Opinions and views published in the "Columns" section are not necessarily the views of the "Vijesti" editorial office.)