Ethical AI and Computational Creativity in Human Creative Spaces
Charlotte's work focuses on AI ethics in creative spaces, including interdisciplinary discussions of computational creativity as a tool for enhancing AI ethics, as well as generative models and human-algorithm collaboration.
Technologically Mediated Phronesis and the Necessity for Mindful Design
Andrew is interested in understanding the psychological and sociological effects of technology on moral reasoning and character, and he hopes to provide a framework for better understanding these connections.
Information Extraction and Reasoning in Legal Texts
This research focuses on integrating knowledge into language models and enhancing their reasoning abilities. Specifically, it explores legal information extraction as a foundational step in advancing natural language processing for the legal domain.
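As a purely illustrative sketch (not the project's actual pipeline), legal information extraction can be thought of as pulling structured fields, such as case citations and dates, out of unstructured legal text. The patterns and example sentence below are hypothetical and assumed for illustration only.

```python
import re

# Hypothetical example: extract simple structured fields from a snippet of legal text.
# The sample sentence and patterns are illustrative, not part of the described research.
text = (
    "In Smith v. Jones, 540 U.S. 123 (2004), the court held on 12 March 2004 "
    "that the contract was void."
)

# U.S. Reports-style case citation, e.g. "540 U.S. 123 (2004)"
citation_pattern = re.compile(r"\b\d{1,3}\s+U\.S\.\s+\d{1,4}\s+\(\d{4}\)")
# Simple "day Month year" date pattern, e.g. "12 March 2004"
date_pattern = re.compile(r"\b\d{1,2}\s+[A-Z][a-z]+\s+\d{4}\b")

print("Citations:", citation_pattern.findall(text))
print("Dates:", date_pattern.findall(text))
```

In practice, rule-based extraction like this is only a baseline; the research described here frames extraction as an input to language-model reasoning rather than an end in itself.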
Navigating the Complexities of Artificial Moral Advisors in the Prospect of AI Moral Enhancement: A Moral Psychology Perspective
Yuxin’s project explores the complications underlying the concept of AI moral enhancement – the process of attempting to improve human moral capacity through external, technological means.
Fair AI
This project explores the responsible use of AI, in particular learning to identify and mitigate bias and algorithmic (un)fairness. It aims to prevent the reinforcement and amplification of existing harmful human biases, with applications to credit access and the financial industry.
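As a minimal, hypothetical sketch of what identifying algorithmic (un)fairness can look like in a credit setting, the example below computes a demographic parity difference (the gap in approval rates between two groups) on made-up data. The metric choice, the group labels, and the decisions are assumptions for illustration, not the project's methodology.

```python
# Hypothetical illustration: one common notion of algorithmic (un)fairness,
# demographic parity, measured on invented credit-approval decisions.
approvals = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # model decisions (1 = approve credit)
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]  # protected-attribute group

def approval_rate(group):
    # Share of approvals among applicants in the given group.
    decisions = [a for a, g in zip(approvals, groups) if g == group]
    return sum(decisions) / len(decisions)

# Demographic parity difference: gap in approval rates between groups.
# A value near 0 suggests similar treatment; a large gap can flag potential bias.
gap = approval_rate("A") - approval_rate("B")
print(f"Approval rate A: {approval_rate('A'):.2f}, "
      f"B: {approval_rate('B'):.2f}, gap: {gap:.2f}")
```

Demographic parity is only one of several competing fairness criteria; mitigation work of the kind described here typically weighs such metrics against accuracy and context-specific legal constraints.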
Who’s Responsible Anyway? Contextualizing bottom-up and top-down approaches to governance innovation in the age of AI
Bhargavi’s research aims to understand the role of regulatory, organizational, and professional modes of governance in bridging potential accountability ‘gaps’ generated by emerging technologies like AI.