How AI technology mediates knowledge, understanding, and our practice of giving and receiving explanations
Emily Sullivan
Co-Director, Centre for Technomoral Futures
Research Areas of Expertise:
Ethics and Philosophy of AI and Machine Learning, Philosophy of Science, Epistemology
Research Summary: Emily’s research explores the ways that AI technology mediates knowledge, understanding, and our practice of giving and receiving explanations. Her work focuses on how AI changes the way we do science and use scientific models in society, which calls for rethinking existing philosophical frameworks surrounding the nature of explanation, idealization, and scientific understanding. At the same time, she considers how normative issues in ethics and epistemology shape the answers to these questions. Emily’s work has been published in philosophy journals such as the British Journal for the Philosophy of Science, the Australasian Journal of Philosophy, and Philosophical Studies, as well as in interdisciplinary computing conferences such as the ACM Conference on Fairness, Accountability, and Transparency (FAccT) and the AAAI/ACM Conference on AI, Ethics, and Society (AIES).
Funded Research Projects (Active):
Machine Learning in Science and Society: A Dangerous Toy? (2025-2030)
European Research Council Starting Grant
Key Publications:
Sullivan, E. (2025, forthcoming) “Value encroachment on scientific understanding and discovery.” Philosophical Studies.
Sullivan, E. (2024) “SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI.” In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). 🔗
Sullivan, E. and Kasirzadeh, A. (2024) “Explanation Hacking: The perils of algorithmic recourse.” In Philosophy of Science for Machine Learning: Core Issues and New Perspectives, eds. Juan Durán and Giorgia Pozzi, Synthese Library, Springer.
Grote, T., Genin, K., and Sullivan, E. (2024) “Reliability in Machine Learning.” Philosophy Compass 19(5), e12974.
Sullivan, E. (2023) “Do Machine Learning Models Represent their Targets?” Philosophy of Science, 1-14. 🔗
Sullivan, E. (2022) “Inductive Risk, Understanding, and Opaque Machine Learning Models.” Philosophy of Science, 1-13. 🔗
Sullivan, E. (2022) “Understanding from Machine Learning Models.” British Journal for the Philosophy of Science 73(1): 109–133. 🔗
Sullivan, E. and Verreault-Julien, P. (2022) “From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse.” Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, pp. 712–722. 🔗
Lawler, I. and Sullivan, E. (2021) “Model explanation versus model-induced explanation.” Foundations of Science 26(4), 1049-1074. 🔗
Sullivan, E., Sondag, M., Rutter, I., Meulemans, W., Cunningham, S., Speckmann, B., and Alfano, M. (2020) “Vulnerability in Social Epistemic Networks.” International Journal of Philosophical Studies 28(5): 731-753.
Sullivan, E., Sondag, M., Rutter, I., Meulemans, W., Cunningham, S., Speckmann, B., and Alfano, M. (2020) “Can Real Social Epistemic Networks Deliver the Wisdom of Crowds?” In Oxford Studies in Experimental Philosophy, Volume 3, eds. Tania Lombrozo, Joshua Knobe, and Shaun Nichols. Oxford University Press.
Sullivan, E. and Khalifa, K. (2019) “Idealizations and Understanding: Much Ado About Nothing?” Australasian Journal of Philosophy 97(4), pp. 673-689. 🔗