Benchmarking Impacts of Machine Learning Bias: Empowering Minorities for a Fair Future
Project type
Final Project Dissertation
Date
May 2023
Location
Manchester, England
Role
Individual Project
I have a deep-rooted passion for fighting social injustice. As a minority in technology, a woman of mixed ethnicity living with a disability, I have faced challenges and witnessed prejudice through several lenses, which has shaped how I see the social constructs around me. I hope this dissertation helps empower the voices of the minority community in technology, and encourages businesses, governmental bodies, and other stakeholders in power to consider avenues for building trust in machine learning (ML) for everybody.
This research project investigates how biases in machine learning (ML) systems affect public trust in both current and future ML applications. It delves into how an individual's background—such as age, ethnicity, gender, and level of knowledge—shapes their perception of ML biases. This study addresses a critical gap: key stakeholders in ML development often lack sufficient motivation to prioritize transparency, safety, and accountability over profit. Building trust in ML requires that stakeholders understand the implications of ML biases.
To address this, I conducted a mono-method qualitative study using semi-structured interviews with 16 participants, primarily from ethnic and gender minority backgrounds in the tech industry. The sample was deliberately diverse: 12 of the 16 participants represented distinct combinations of age, ethnicity, gender, knowledge level, and origin. The findings underscored that individuals' trust in ML is shaped not only by the technical capabilities of ML systems but also by the profiles and intentions of the stakeholders behind their development.
For future ML systems, participants emphasized the need for technical improvements and inclusivity for minorities, expressing concerns over the motives of those responsible for creating these systems. Trust levels were shown to be affected by personal characteristics, lived experiences, exposure to discrimination, and individual identity.
Recommendations and Future Directions: The study suggests that future research should incorporate interpretive perspectives and diverse, minority views to deepen understanding of ML trust dynamics. Replication of this study with theoretical sampling based on age and the use of high-quality ML benchmarks inclusive of minority perspectives are recommended to strengthen future findings.
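To make the benchmarking recommendation concrete, the short Python sketch below is a hypothetical illustration (not code from the dissertation; the group labels and numbers are invented) of one common way a fairness benchmark can quantify ML bias: the demographic parity gap, the spread in positive-prediction rates across demographic groups.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between
    any two groups (0.0 means perfectly even treatment)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Invented example: a model's approval decisions across two groups.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)  # {'A': 0.8, 'B': 0.4}
print(gap)    # 0.4 -> group A is favoured twice as often

A gap near zero suggests even treatment on this one axis; benchmarks inclusive of minority perspectives would combine metrics like this with data and test cases that actually represent the affected groups.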
This research contributes to both theoretical and practical literature by illuminating how deeply ML bias impacts trust and offering pathways for ML stakeholders to address these biases effectively.
Presented findings to industry, including Jaguar Land Rover developer groups, raising awareness of ML bias and providing actionable recommendations for developing trustworthy AI systems.