A Responsibility Framework for Governing Trustworthy Autonomous Systems


Project dates (estimated):

Sep 2021 – Aug 2025


Name of the PhD student

Bhargavi Ganesh


Supervisors

Stuart Anderson – School of Informatics
Shannon Vallor – School of Philosophy, Psychology and Language Sciences (Philosophy)


Project aims:

This research develops a comprehensive responsibility framework that enables stakeholders involved in the design, development, and deployment of decision-making algorithms and autonomous systems to govern those systems effectively and take responsibility for their outcomes in fields such as health, robotics, and finance. The research is part of the UKRI Trustworthy Autonomous Systems node project on Governance and Regulation, which will use the responsibility framework to inform governance tools and techniques for autonomous systems.


Disciplines and subfields engaged:

  • Moral Philosophy

  • Critical Information Studies

  • Public Policy

  • Computational Social Science

  • Robotics and Artificial Intelligence


Research Themes:

  • Ethics of Algorithms

    • Algorithmic Transparency and Explainability

    • Algorithmic Accountability and Responsibility

  • Ethics of Human-Machine Interactions

    • Ethics of Automation

  • Emerging Technology and Human Identity

    • Emerging Tech and Human Autonomy


Related outputs:

  • Completed a six-month internship at the Centre for Data Ethics and Innovation (CDEI), since renamed the Responsible Technology Adoption Unit (RTA) within the Department for Science, Innovation and Technology (DSIT).

  • ‘The Historical Emergence of Policymaking in Response to Technological Advances: Lessons for AI Policy and Governance’. Accepted at the Association for Public Policy Analysis and Management (APPAM) Fall Research Conference 2023 (non-archival).

  • Vallor, Shannon and Ganesh, Bhargavi (2023). ‘Artificial intelligence and the imperative of responsibility: reconceiving AI governance as social care.’ In Kiener, M. (Ed.) The Routledge Handbook of Philosophy of Responsibility. New York: Routledge, 395-406. 🔗

  • ‘The Role of Governance in Bridging AI Responsibility Gaps: An interdisciplinary evaluation of emerging AI governance measures.’ In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES). 🔗

  • Served on the program committee for the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT).

  • Presented with Aditya Singh on responsibility in data and AI supply chains, focusing on the role of data brokers, as part of the Mobilising Technomoral Knowledge panel at the Society for Philosophy & Technology Conference 2023 in Tokyo.

  • Co-led a workshop with Joe Noteboom on what it means to do interdisciplinary research at the AI Ethics & Society Doctoral Colloquium in Edinburgh (October 2022).

  • Received the Best Paper Award at the We Robot 2022 conference. 🔗

  • ‘If It Ain’t Broke Don’t Fix It: Steamboat Accidents and Their Lessons for AI Governance’, Bhargavi Ganesh, Stuart Anderson and Shannon Vallor, We Robot 2022 Conference, 2022. 🔗

  • Co-organised the UKRI Trustworthy Autonomous Systems Workshop. 🔗