How AI-enabled systems raise novel liability/accountability issues and evidentiary challenges

 

Claudia González-Márquez

Teaching and Research Fellow in Artificial Intelligence, Data, and the Rule of Law

 

Research Areas of Expertise:

AI governance and accountability; data protection and privacy; evidence and procedural fairness; cybercrime and cybersecurity law; neurotechnology regulation; and AI ethics

Research Summary:

Dr González-Márquez’s research sits at the intersection of AI governance, cybersecurity, evidence, and procedural fairness, with a particular focus on how AI-enabled systems raise novel liability, accountability, and evidentiary challenges. Claudia’s work aims to make these problems tractable for institutions by translating complex doctrine into practical standards for responsibility, oversight, and evidence reliability.

Claudia’s doctoral work (University of Edinburgh) examined security vulnerabilities and malicious interference in neurotechnology systems as a case study in AI-enabled misuse, assessing the adequacy of UK and EU legal frameworks across tort, criminal, cybercrime, and data protection law. From this, she developed an accountability framework and a model legal response to unauthorised interference and the harms it causes. Claudia’s work has been published in leading journals such as Law, Ethics & Technology, and presented at interdisciplinary law and science conferences including Digital Humanism and BILETA.

Key Publications:
