Who’s Responsible Anyway? Contextualizing bottom-up and top-down approaches to governance innovation in the age of AI
Project dates (estimated):
Sep 2021 – Aug 2025
Name of the PhD student:
Bhargavi Ganesh
Supervisors:
Stuart Anderson – School of Informatics
Shannon Vallor – School of Philosophy, Psychology and Language Sciences (Philosophy)
Project aims:
Bhargavi’s research aims to understand the role of regulatory, organizational, and professional modes of governance in bridging potential accountability ‘gaps’ generated by emerging technologies like AI. To that end, her research embeds a conceptual, applied ethics approach within historical and empirical analyses of emerging top-down and bottom-up approaches to AI governance. Throughout her work, she investigates the notion of governance as a form of innovation—requiring the same process of design, testing, iteration and evaluation undertaken for other innovations.
This research has informed work within the UKRI Trustworthy Autonomous Systems node project on Governance and Regulation, as well as policy engagements with government departments in the UK and US.
Disciplines and subfields engaged:
Moral Philosophy
Critical Information Studies
Public Policy
Computational Social Science
Robotics and Artificial Intelligence
Research Themes:
Ethics of Algorithms
Algorithmic Transparency and Explainability
Algorithmic Accountability and Responsibility
Ethics of Human-Machine Interactions
Ethics of Automation
Emerging Technology and Human Identity
Emerging Tech and Human Autonomy
Related outputs:
Completed a six-month internship at the Centre for Data Ethics and Innovation (CDEI), since renamed the Responsible Technology Adoption Unit (RTA) within the Department for Science, Innovation and Technology (DSIT).
‘The Historical Emergence of Policymaking in Response to Technological Advances: Lessons for AI Policy and Governance’. Accepted at the Association for Public Policy Analysis and Management (APPAM) Fall Research Conference 2023 (non-archival).
Vallor, Shannon and Ganesh, Bhargavi (2023). ‘Artificial intelligence and the imperative of responsibility: reconceiving AI governance as social care.’ In Kiener, M. (Ed.) The Routledge Handbook of Philosophy of Responsibility. New York: Routledge, 395-406. 🔗
‘The Role of Governance in Bridging AI Responsibility Gaps: An interdisciplinary evaluation of emerging AI governance measures.’ In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES). 🔗
Served on the program committee for the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT).
Presented with Aditya Singh on responsibility in data and AI supply chains, focusing on the role of data brokers, as part of the Mobilising Technomoral Knowledge panel at the Society for Philosophy & Technology Conference 2023 in Tokyo.
Co-led a workshop with Joe Noteboom on what it means to do interdisciplinary research at the AI Ethics & Society Doctoral Colloquium in Edinburgh (October 2022).