Fair AI


Project dates (estimated):

 Sep 2020 – Aug 2024


Name of the PhD student:

Savina Kim


Supervisors:

Galina Andreeva – Business School
Michael Rovatsos – School of Informatics


Project aims:

This project explores the responsible use of AI, in particular how to identify and mitigate bias and algorithmic (un)fairness. It aims to prevent the reinforcement and amplification of harmful existing human biases, with applications to credit access and the financial industry.
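
As a minimal illustration of one way such bias can be identified, the hypothetical sketch below computes a demographic parity difference, i.e. the gap in approval rates between two demographic groups in a set of credit decisions. The function and the toy data are illustrative assumptions, not part of the project's own methodology.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-outcome (e.g. credit approval) rates between two groups.

    y_pred: 0/1 model decisions; group: 0/1 group membership labels.
    A value near 0 means similar approval rates across groups; a large value
    flags potential disparate impact (an illustrative check, not a full audit).
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # approval rate in group 0
    rate_b = y_pred[group == 1].mean()  # approval rate in group 1
    return abs(rate_a - rate_b)

# Toy example: eight credit decisions across two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # prints 0.5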


Disciplines and subfields engaged:

  • AI and Data Ethics

  • Algorithmic Impact and Responsibility

  • Explainability and Interpretability in Machine Learning

  • Fairness, Bias and Discrimination in Machine Learning

  • Finance and Fintech

  • AI Auditing


Research Themes:

  • Ethics of Algorithms

    • Unfair Bias and Discrimination in Machine Learning

    • Ethics of Algorithmic Decision-Making

    • Algorithmic Accountability and Responsibility

  • Ethics of Human-Machine Interactions

    • Ethics of Automation

  • Ethics and Politics of Data

    • Ethics of Data Science and Data Practice


Related outputs: 

  • The Double-Edged Sword of Big Data and Information Technology for the Disadvantaged: A Cautionary Tale from Open Banking, Conference paper and presentation, Credit Scoring and Credit Control Conference, Sept. 2023.

  • Fair Models in Credit: Intersectional Discrimination and the Amplification of Inequity, Conference paper and presentation, Credit Scoring and Credit Control Conference, Sept. 2023.

  • What is Algorithmic Bias and Why Should We Care?, Bailey Kursar and Savina Kim, Smart Data Foundry, 2022.

  • Algorithmic Bias: What is it and what can we do about it? (Smart Data Foundry online panel discussion), 2022.