If you type ‘picture of a nurse’ into the text-to-image tool Midjourney, the result is striking: the AI generates only images of young, white women with long hair who conform to standardized beauty ideals. The tool thus not only reproduces stereotypes, it also fails to depict social diversity or to overcome the ‘male gaze’, the sexualizing male perspective. This is precisely what Professor Torsten Schön and Professor Matthias Uhl want to change with their interdisciplinary research project ‘EvenFAIr’.
Computer vision professor Schön and AI ethicist Uhl have set themselves the goal of implementing fairness and diversity criteria in such systems. ‘More and more people are using generative models such as ChatGPT, DALL-E or Midjourney,’ explains Schön. ‘However, these tools do not take fairness criteria into account; instead they reproduce and in some cases reinforce prejudices.’ This is a cause for concern, as the generated images have a demonstrable influence on users' opinions. ‘In addition, safety-critical applications require careful consideration of fairness criteria to ensure that no groups of people are disadvantaged. It would be fatal if dark-skinned people were recognized less reliably by AI algorithms in road traffic and therefore ran a higher risk of being overlooked by autonomous vehicles.’
It is therefore vital to develop a methodology that makes fairness in AI models measurable and allows developers to intervene in the training process. The research project by Schön and Uhl, conceived in cooperation with e:fs, aims to find ways of checking generative AI models against fairness criteria in a standardized manner, and to provide a toolset that anchors these criteria in the training process and so ensures fair generative AI models: in short, an AI without bias.
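To give a rough idea of what ‘making fairness measurable’ could look like, the following minimal Python sketch computes a simple demographic-parity gap over a batch of generated images. It is an illustration only, not the EvenFAIr methodology: the attribute labels (which in practice would come from an attribute classifier run on the generator's outputs), the 50/50 target distribution, and the function name demographic_parity_gap are all assumptions made for this example.

```python
import numpy as np

# Illustrative sketch: how far does the demographic distribution of generated
# images deviate from a target distribution? In a real pipeline the labels
# would be produced by an attribute classifier applied to the generator's
# outputs; here they are mocked with random draws.

rng = np.random.default_rng(0)

# Mock labels for 1,000 generated 'nurse' images: 0 = male, 1 = female.
# The skewed probabilities mimic the stereotype described in the article.
predicted_gender = rng.choice([0, 1], size=1000, p=[0.05, 0.95])

def demographic_parity_gap(labels, target_share=0.5):
    """Absolute gap between the observed share of one group and a target share."""
    observed_share = labels.mean()
    return abs(observed_share - target_share)

gap = demographic_parity_gap(predicted_gender)
print(f"Share of images labelled 'female': {predicted_gender.mean():.2f}")
print(f"Demographic parity gap vs. a 50/50 target: {gap:.2f}")
```

A metric like this only quantifies one dimension of bias; a toolset of the kind described above would combine several such measurements and feed them back into training.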