Authors
- Jeffrey Brown*
- Tina Park*
- Jiyoo Chang*
- McKane Andrus*
- Alice Xiang
- Christine Custis*
* External authors
Venue
- EAAMO 2022
Date
- 2022
Attrition of Workers with Minoritized Identities on AI Teams
Abstract
The effects of AI systems are far-reaching and affect diverse communities all over the world. The demographics of AI teams, however, do not reflect this diversity. Instead, these teams, particularly at big tech companies, are dominated by Western, White, and male workers. Strategies for preventing harms done by AI must therefore include making these teams more representative of the diverse communities that these technologies affect. The pipeline of students from K-12 through the university level contributes to this: those with minoritized identities are underrepresented in or excluded from pursuing computer science careers. However, relatively little attention has been given to how the culture at tech companies, let alone on AI teams, contributes to the attrition of minoritized people in the workplace. The current study uses semi-structured interviews with minoritized workers on AI teams, managers of AI teams, and leaders working on diversity, equity, and inclusion (DEI) in the tech field (N = 43) to investigate why these workers leave AI teams. The themes from these interviews describe how the culture and climate of these teams may contribute to the attrition of minoritized workers, as well as strategies for making these teams more inclusive and representative of the diverse communities affected by the technologies they develop. Specifically, the current study found that AI teams in which minoritized workers thrive tend to foster a strong sense of interdisciplinary collaboration, support professional career development, and are run by diverse leaders who understand the importance of undoing traditional White, Eurocentric, and male workplace norms. These strategies go beyond the “quick fixes” that are prevalent in DEI practices.