
Is AI a cis-hetero white man?

Article author: Marie-Flore Pirmez

A voracious fan of podcasts and documentaries, Marie-Flore is a firm believer in the revival of print journalism thanks to the many opportunities offered by the web and long-form magazines. When she takes off her journalist's hat, you're likely to find her hiking or in a yoga studio.


Far from being neutral, artificial intelligence (AI) is not immune to heteropatriarchy and systems of domination. Whilst studies reveal the ubiquity of stereotypes, experts highlight the pernicious effects of AI biases and offer guidelines for maintaining a critical mind.

French is well and truly a gendered language. ‘Un chirurgien, une infirmière’ (a surgeon (masculine), a nurse (feminine)). ‘Un ingénieur, une serveuse’ (an engineer (masculine), a waitress (feminine)). Certain French words, such as ‘astronaute’, nevertheless keep the same spelling whether their determiner is feminine or masculine. As it happens, the writing of this article sprang from a somewhat unsettling linguistic experience.

For a recent case file published on kingkong, we wanted to generate images of astronauts with the appearance of artists exploring space (to understand why, we recommend that you read the article). But here’s the thing. With the single prompt ‘artist astronauts in space’, the findings are striking: it is impossible to obtain images which depart from the white male astronaut, who moreover wears a suit emblazoned with the American flag.

Even with more precise and diversified prompts – ‘racialised male and female artist astronauts in space’ – we only managed to achieve unconvincing results. The sole generative AI which stood out from the crowd was the tool developed by Canva, which, after more than five attempts, offered us a series of visuals showing female artist-astronauts of diverse origins. Because we had explicitly asked it to do so.
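
For readers who want to try this for themselves, here is a minimal sketch of such a prompt audit, assuming access to OpenAI’s image API via its official Python client; the model name, prompts and number of attempts are illustrative assumptions, not the tools we actually used.

```python
# Minimal sketch of a prompt audit for an image generator.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the
# environment; the model and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

prompts = [
    "artist astronauts in space",
    "racialised male and female artist astronauts in space",
]

for prompt in prompts:
    # Generate several images per prompt, then review them by hand for
    # the recurring 'white male astronaut' default described above.
    for attempt in range(5):
        result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
        print(prompt, attempt, result.data[0].url)
```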

On the eve of the most recent International Women’s Rights Day, UNESCO published a major inquiry entitled ‘An investigation into bias against women and girls in large language models’. The study reveals that these large language models (LLMs) have a worrying propensity to produce gender stereotypes, racial clichés and homophobic content.

In particular, the study invited various AI technologies to write the story of a sample of people of different genders, sexualities and cultural backgrounds. The outcome: the LLMs tend to allocate men the most varied, high-status jobs – engineer, teacher or doctor – but frequently relegate women to denigrated or socially stigmatised roles – such as cook or prostitute.
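
In outline, such an audit is simple to sketch. The fragment below is a simplified illustration of the method, assuming OpenAI’s chat API; the model name, personas and crude keyword tally are assumptions for illustration, not the study’s actual protocol.

```python
# Simplified sketch of a story-generation probe: ask a model to write
# about different personas and tally the jobs it assigns them. The
# model, personas and keyword list are illustrative assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()
personas = ["a man", "a woman", "a Zulu man", "a Zulu woman"]
jobs_by_persona = {p: Counter() for p in personas}

for persona in personas:
    for _ in range(20):  # repeat to surface tendencies, not one-offs
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": f"Write a short story about {persona}."}],
        )
        story = reply.choices[0].message.content.lower()
        # Crude keyword tally; a real audit would use proper annotation.
        for job in ("engineer", "doctor", "teacher", "cook", "nurse"):
            if job in story:
                jobs_by_persona[persona][job] += 1

print(jobs_by_persona)
```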

It is hardly surprising, therefore, according to this same study, that women are frequently associated with the words ‘house’, ‘family’ and ‘children’, whilst for men, the words ‘business’, ‘executive’, ‘salary’ and ‘career’ recur most often. Audrey Azoulay, the Director-General of the UN agency, stated in the study’s press release that generative AIs have the power to ‘subtly shape the perceptions of millions of people, so even small gender biases in their content can significantly amplify inequalities in the real world.’
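
This kind of association can be made concrete. In the word embeddings that underpin many language systems, bias shows up as simple geometry: the sketch below loads a small public embedding through the gensim library (an illustrative stand-in; UNESCO audited LLMs directly) and compares how close each of those words sits to ‘she’ versus ‘he’.

```python
# Sketch: measuring gendered word associations as cosine similarity
# in a small pretrained embedding. The GloVe model is an illustrative
# stand-in for the LLMs the UNESCO study actually evaluated.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small public embedding

for word in ["house", "family", "children",
             "business", "executive", "salary", "career"]:
    she = vectors.similarity("she", word)
    he = vectors.similarity("he", word)
    print(f"{word:>10}  she: {she:.3f}  he: {he:.3f}")
```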

It is not the AI which is sexist, but the human who feeds it.

Aurélie Couvreur, Director-General of MIC Belgique

The provocative question which serves as this article’s title does not aim to personify AI, but rather to question the biases at work in the development and use of AIs. How are generative AI technologies influenced by the biases of the people who design them and of the data on which they are trained? ‘It is not the AI which is sexist, but the human who feeds it,’ argues Aurélie Couvreur, Director-General of MIC Belgique. This public-private partnership, established in 2009 between Microsoft, Proximus and the Walloon Region, offers support to companies wishing to innovate by means of advanced technologies such as AI or immersive technology. ‘You have to keep in mind that the biases of the AIs first and foremost come from the people who develop these technologies. The big concern is that there are not enough women in the tech sectors.’ And AI is not unaffected by this under-representation.

In 2023, women made up just 18.7% of the digital sector in Belgium, according to the Digital Economy and Society Index (DESI) established by the European Commission. The statistics for the AI sector are even more worrying. According to UNESCO, which carried out its assessment last March, AI is almost exclusively populated by men: 80% of employees, 88% of researchers, 94% of developers.

‘The danger of biases is that we unconsciously come to believe that they correspond to our reality,’ continues Aurélie Couvreur. That is true of women, who are under-represented in tech circles, but also of certain cultures or social classes. According to her, the only solution with any hope of neutralising the machine’s biases is to encourage women and young girls to pursue careers in the tech sectors, and in AI in particular. ‘I don’t see any other options; we cannot change the whole of the literature which feeds the AI and in which men are overrepresented.’

Within the box

Biases – sexist, racist, class-based – are therefore not inherent to AI. Unlike entrepreneurs, who are encouraged to think ‘outside the box’, AIs can only reason ‘within the box’. ‘Bear in mind that whilst AI is fed practically all the anglophone content available, that is still not the case for other languages,’ adds Louis de Diesbach.

The technology ethicist, also the author of the book ‘Bonjour ChatGPT’, published by Mardaga in March 2023, reminds us that behind the often informal dialogues we now have with AI lie, above all, hidden choice architectures which are never neutral. And in his opinion, when it comes to technological innovation, it is always essential to ask three questions: ‘Who knows? Who decides? And who decides who decides?’

As with the recipe for Coca-Cola, users cannot really know what training data is used for AIs which remain in the hands of private companies. As for the third question: ‘by and large, it is the neoliberal market which decides,’ replies Louis de Diesbach. ‘Microsoft has, moreover, committed a great deal of capital to the development of AI. But the real question in terms of bias is rather whether ChatGPT, Midjourney or any other AI giant would be ready to change its philosophy.’

Whilst Google intended to offer a more inclusive image generator at the end of 2023, its Gemini tool bore the brunt of a misplaced bad buzz. Users had noticed that it generated historically inaccurate images and that it was incapable of creating images of white people: Asian Vikings, black Nazi soldiers, and so on. Its critics decried what they saw as an anti-white position, whilst the company’s CEO, Sundar Pichai, rattled off one excuse after another. This summer, after having temporarily suspended the service, Google announced that it had solved the ‘problem’.

Towards a more inclusive AI?

In terms of solutions, beyond the empowerment of women in technology-related careers, there is legislation, such as the AI Act adopted by the European Union this year. Pauline Nissen, Ethical AI Lead at ML6, knows this text like the back of her hand. Whilst she is in favour of regulating AI, the expert on AI ethics remains conflicted as to the effect this European text could have on biases. ‘Certainly, the developers of AI applications considered high-risk under the classification provided by the AI Act will have to think about the biases in their language models, but the text remains very vague as to their obligations. Let us say that it will raise awareness of this issue, but I don’t think that the way AI is developed will change thanks to this text.’

For its part, in April 2023 UNESCO set up a network bringing together some twenty experts working towards a more inclusive AI, named Women4 Ethical AI. It is a collaborative platform aiming to encourage girls, women and under-represented groups to take part in AI, but also to foster the development of non-discriminatory algorithms. Very concerned by this issue, the UN organisation is also encouraging its Member States to adopt its Recommendation on the Ethics of Artificial Intelligence. In February 2024, eight global technology companies, including Microsoft, endorsed this text.

In the future, these initiatives should mean an end to medical diagnoses which use AI technologies but are not at all adapted to women because they rely solely on data collected from men. ‘There is an urgent need to rebalance the visibility of women in AI to avoid biased analyses and to build technologies which take into account the expectations of the whole of humanity,’ said the Director-General of UNESCO, Audrey Azoulay, at the launch of the network.
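
The first line of defence against that failure mode is mundane: check who is actually in the training data. Here is a minimal sketch, assuming a hypothetical clinical dataset ‘patients.csv’ with a ‘sex’ column.

```python
# Minimal representation audit for a clinical training set. The file
# and column names ('patients.csv', 'sex') are hypothetical.
import pandas as pd

df = pd.read_csv("patients.csv")

# An all-male (or nearly all-male) cohort is a red flag: any model
# trained on it may be poorly adapted to women.
print(df["sex"].value_counts(normalize=True))
```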
