"You'll be a nurse, my son!" Automatically Assessing Gender Biases in Autoregressive Language Models in French and Italian
Abstract
Language models are now massively used for a variety of tasks, including open-ended generation and writing assistance. However, generated texts can encapsulate biases and harm users. A variety of articles aim at detecting, measuring and mitigating stereotypical biases, but they focus mainly on English and on pre-training tasks. We therefore propose a framework to automatically measure gender biases in texts generated by language models in inflected languages, in a practical setting. Herein, we report experiments using this framework on seven autoregressive language models used to generate more than 52,000 cover letters in French, covering 203 industries and sectors, and more than 4,100 cover letters in Italian, covering 55 sectors. Associations between occupation and gender are studied using a system that we introduce to automatically identify morpho-syntactic gender markers in text. Results suggest that all models are strongly biased towards generating texts containing masculine gender markers. Overall, generated texts contain twice as many masculine (vs. feminine) markers in French, and eight times as many in Italian. Models also exacerbate gender stereotypes that are evidenced in social science studies: they associate feminine inflections with occupations related to care, children and physical appearance, whereas occupations that require physical, technical and manual skills are strongly associated with masculine markers.
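To illustrate the kind of morpho-syntactic gender detection the abstract describes, here is a minimal sketch, not the authors' actual system, that counts gender-inflected tokens in French text using spaCy's morphological analysis. The model name `fr_core_news_md` and the function `count_gender_markers` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed approach, not the paper's system) of counting
# morpho-syntactic gender markers in French text with spaCy.
# Assumes the French model is installed:
#   python -m spacy download fr_core_news_md
from collections import Counter

import spacy

nlp = spacy.load("fr_core_news_md")

def count_gender_markers(text: str) -> Counter:
    """Count tokens whose morphological analysis carries a Gender feature."""
    doc = nlp(text)
    counts = Counter()
    for token in doc:
        # token.morph.get("Gender") returns e.g. ["Masc"] or ["Fem"] for
        # gender-inflected tokens (determiners, adjectives, nouns, participles).
        for gender in token.morph.get("Gender"):
            counts[gender] += 1
    return counts

print(count_gender_markers("Elle est infirmière ; il est ingénieur."))
# Expected output, roughly: Counter({'Fem': 2, 'Masc': 2})
```

Aggregating such counts over a corpus of generated cover letters, per occupation, would yield the kind of masculine-to-feminine marker ratios reported above.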
Domains
Text and document processing

Origin
Files produced by the author(s)