The Vulnerability Gap: Responsibility and Moral Agency in Socio-Technical Systems
Project dates (estimated):
September 2024 - August 2028
Name of the PhD student:
Harry Weir-McAndrew
Supervisors:
Professor Tillmann Vierkant – School of Philosophy, Psychology & Language Sciences
Professor Shannon Vallor – School of Philosophy, Psychology & Language Sciences
Project aims:
The proliferation of artificial intelligence and autonomous systems (AI/AS) presents novel challenges to our established practices of moral responsibility and agency. This project addresses a core dimension of this challenge by focusing on what Vallor & Vierkant (2024) term the "vulnerability gap": a critical asymmetry in vulnerability within our interactions with and through AI/AS. While human users and affected parties are increasingly exposed to potential harms, the AI/AS themselves (as non-sentient artefacts) are incapable of reciprocal affective engagement, and the human actors involved in their creation and deployment are often systemically insulated from the direct, vulnerable moral address that typically characterises human moral relationships.
This research argues that the vulnerability gap identifies a distinct and critical set of challenges for moral responsibility concerning AI/AS, focusing on relational and affective dimensions that traditional "responsibility gap" framings, centred on epistemic or control deficits, fail to capture. While AI/AS can exacerbate problems of knowledge and control, the deeper challenge lies in the inability of these systems, and often the diffuse human networks behind them, to participate in the mutual, affectively charged exchanges that underpin human moral learning and accountability. This relational deficit impedes the cultivation of responsible agency among the human stewards of technology and can lead to serious societal consequences, including what this research identifies as technomoral alienation: an estrangement from the human origins of technological harm and a breakdown in social trust.
Drawing on insights from moral philosophy (particularly scaffolded agency theories), phenomenology, the philosophy of technology, and critical social theory (e.g., Weber, Arendt on bureaucratic rationality), this project undertakes a high-level philosophical investigation with the following aims:
To articulate the Vulnerability Gap’s philosophical foundations, its distinct characteristics in the context of AI/AS, and its systemic impact on our moral ecology.
To examine how this gap contributes to phenomena such as technomoral alienation, diminished accountability, and the erosion of conditions for moral learning and development within socio-technical contexts.
To demonstrate how a focus on the Vulnerability Gap provides a more incisive lens for understanding challenges in AI governance than traditional "responsibility gap" formulations.
To explore conceptual and normative strategies for re-establishing conditions for meaningful human accountability by fostering socio-technical environments that encourage a greater degree of reciprocal moral address and sensitivity to human vulnerability.
To translate these philosophical insights into actionable recommendations for policymakers and industry, informing the development of governance frameworks, ethical guidelines, and design practices that work actively to bridge the Vulnerability Gap, thereby supporting human moral agency and flourishing.
This research seeks to provide a deeper understanding of the relational dynamics at the heart of AI ethics. By focusing on the conditions necessary for human moral responsibility to thrive, the project aims to contribute to a more robust and ethically sound integration of advanced technologies into society, offering pathways towards a future where innovation is aligned with human values and accountability.
Disciplines and subfields engaged:
Moral Philosophy (philosophy of responsibility, moral psychology, AI ethics, theories of agency)
Phenomenology (moral experience, intersubjectivity, social ontology)
Philosophy of Technology
Science and Technology Studies (socio-technical systems, critical perspectives on AI)
Social and Political Philosophy (theories of accountability, critiques of power and institutions)
Social Epistemology
Research Themes:
Emerging Technology and Human Identity
AI, Automation and Human Wisdom
Emerging Tech and Human Autonomy
Ethics of Algorithms
Algorithmic Justice, Power, Freedom and Equity
Bias and Discrimination in Machine Learning
Algorithmic Accountability and Responsibility
Ethics of Human-Machine Interactions
Ethics of Automation
Ethics of Artificial Agent and Robot Design
Ethics of Affective and Social Technologies
Emerging Technology, Health and Flourishing
Emerging Tech and Human Flourishing