PhD Studentship Opportunity

Beyond clinical utility: ethical dimensions of the social roles of AI-generated health categories

The Edinburgh Futures Institute’s Centre for Technomoral Futures and Edinburgh Law School are delighted to invite applications for this PhD studentship, funded by Baillie Gifford, to begin in the academic year 2024/2025.

The studentship, which is open to UK, EU and international applicants, will support rigorous interdisciplinary PhD research into the ethical challenges posed by the growing use of data and artificial intelligence.

Deadline: 5pm Monday 18 December 2023

Supervisors:

  • Dr Emily Postan, Edinburgh Law School

  • Secondary supervisor, to be confirmed

The Project:

The project will address the ethical significance of new and reconfigured health-related categories - such as diagnoses, disease risk, or precision care profiles - generated by machine learning (ML), beyond their utility for their intended clinical purposes. It will explore how these ML-generated health categories do, or could, function in wider social contexts, for example as means of identifying and distinguishing groups of people, or of conceptualising health and (dis)ability. The project will take a normative approach to these questions: it will not only identify and characterise particular social roles of ML-generated categories, but also explain why these roles are significant from ethical or social justice perspectives. For example, it could ask how these categories might:

  • contribute to addressing, or exacerbating, health inequalities;

  • affect social cohesion or solidarity;

  • reshape public health priorities, institutions, or environments; or

  • generate new, or deconstruct existing, axes of power, discrimination, or oppression.

It will consider how to weigh social impacts (such as those suggested in the indicative list above) alongside the intended or hoped-for health benefits of ML applications. It will also explore how associated risks and benefits might be managed through mechanisms such as the design, ethical oversight, or regulation of healthcare applications of ML.

This project will make explicit the need to look beyond existing dominant themes in health AI ethics, such as clinical reliability, safety, explicability, and trustworthiness, to take in wider relational and social considerations. It will contribute to AI ethics more broadly, by offering a richer range of ways to think about what 'good' and 'bad' ML-generated categories might look like, and by highlighting ways in which domain-specific applications of AI could have socially pervasive ramifications.
