ETEG Organizers

  • Gabriela Arriagada Bruneau
  • Dilara Boga
  • Arzu Formánek
  • Ori Freiman
  • Zach Gudmunsen
  • Diego Morales
  • Fabio Tollon
PhD Researcher at University of Leeds

MSc in Philosophy, University of Edinburgh. BA in Philosophy, Pontifical Catholic University of Chile.

I (she/they) am a fourth-year PhD student in Philosophy of AI & Data Ethics at the University of Leeds (UK), based at the Inter-disciplinary Applied Ethics Centre and the Leeds Institute for Data Analytics.

My main research interest is Data Ethics. My thesis develops an Ethical Framework for Bias and Fairness in Data Science by conceptually re-engineering these notions. I am also interested in issues concerning explainability, interpretability, causal frameworks, and AI and feminism.

In my free time I like playing video games and doing fitness training!

PhD Researcher at CEU

I (she/they) am a third-year PhD student in Philosophy at the Central European University (Vienna). 

I believe that clarifying what is fundamental to human agency is crucial to a greater understanding of the ontological and moral status of AIs. In my dissertation, I draw on philosophy of science, in particular work on the evolution of cooperation and sociality, and argue that cooperation is fundamental to moral agency. It is no surprise that cultures consider cooperative behaviour 'morally good'. So, to have social interactions with humans, AIs need moral expressions of themselves (such as social emotions) to signal cooperative or competitive behaviour; otherwise, AIs are likely to be perceived as mere moral 'tools' rather than moral 'agents', and they cannot be trusted. The outcomes AIs produce can be consistent and reliable, but AIs cannot be responsible for those outcomes, since real trust between two moral agents requires their ability to 'choose'. I claim that current AIs lack the ability to 'choose' for themselves (what I call 'moral choice') because they do not have self-interest as a reaction to the collective interest.

In my free time, I love drawing and writing short stories. 

PhD Researcher at University of Vienna

BSc in Mathematics, MA in Philosophy

I’m doing my PhD at the University of Vienna, as part of the Forms of Normativity (FoNTI) project.

My research is on our social and moral relations to robots, with a focus on our social cognition of them, approached from a cognitive-scientific perspective.

I argue that there are sufficient reasons for social robots to be granted an indirect moral standing (patiency, moral consideration). That is, it matters to us how we treat robots, even though it does not and cannot matter to the robots themselves. I derive reasons for this indirect status from the nature of our social cognition: the way we come to be, and remain, social beings, and how this mechanism works in our cognition of the sociality of robots.

My supervisor is Mark Coeckelbergh, and more information about my research can be found here.

I enjoy sourdough baking and crocheting.

Post-Doctoral Fellow at the Ethics of AI Lab at the University of Toronto's Centre for Ethics

My current research deals with the theoretical foundations of collective epistemology. I argue against an inherent anthropocentric assumption that leaves AIs out of epistemological and ethical analysis. Additionally, I study the concept of trust in the context of 'trustworthy AI' and its relation to self-regulation practices and policymaking.


I have submitted my dissertation, The Role of Knowledge in the Formation of Trust in Technologies, to the Graduate Program in Science, Technology and Society at Bar-Ilan University. Before that, I gained professional experience as a researcher of governance mechanisms in distributed systems and completed an MA in Library and Information Studies. My academic education is rooted in analytic philosophy (BA). 


More information about me and my research can be found here.

PhD Researcher at University of Leeds

I am a fourth-year PhD candidate supervised by Rob Lawlor and Graham Bex-Priestley. My PhD thesis focuses on AI ethics, specifically machine ethics.

My research focuses on evolutionary approaches to designing artificial moral agents. Human moral judgements are the result of an evolved capacity – just as we evolved to see, co-operate and talk, we evolved to be aware of and pursue solutions to moral problems. I defend the idea that anyone wanting to design an artificial moral agent should be informed by evolutionary ethics. Effective artificial moral systems should be designed in a way that is sensitive to the connection between human morality and human evolutionary history. I believe that the best way to make moral artificial intelligences is to develop them using game-theoretic solutions to moral problems, and to codify these solutions within the artificial system as a variable that approximates human ‘moral emotions’.

I argue against the idea that artificial systems can be ‘full’ moral agents, as humans are, without an evolutionary history. While the idea of a system that can provide moral solutions or beliefs without such a history is appealing and, on the face of it, simpler, our current methodologies for creating artificial systems are insufficient to produce non-evolutionary moral systems. Nor are they likely ever to do so, since any non-evolutionary input is likely to impinge on the agent’s independence from others, and therefore undermine its autonomy.

This project is interdisciplinary and spans fields including philosophy of mind, applied ethics, metaethics, moral epistemology, and evolutionary theory. However, my approach is fundamentally metaethically motivated: I believe that creating ‘full’ moral agents will, in the long run, help us to better understand the true nature of moral facts.

In my free time I like to play video games, read postmodernism and eat bazlama.

PhD Researcher at Eindhoven University of Technology (TU/e)

I am a Doctoral Researcher in the Philosophy & Ethics Group at Eindhoven University of Technology (TU/e). My main research interests include Epistemology, Philosophy of Mind, Philosophy of Artificial Intelligence, and their intersections. I particularly enjoy exploring deep questions about our minds, knowledge, and thinking capacities, and how answers to these questions inform current research on AI. In addition to working in these fields, I have broad interests in Analytic Metaphysics, Philosophy of Science, and debates in Metaphilosophy.

Before joining the Philosophy & Ethics Group at TU/e, I obtained an MA in Philosophy and Cognitive Studies from the Language and Mind Programme at the Università degli Studi di Siena, in Italy. Prior to that, I studied in my home country, Chile, where I obtained a BA in Philosophy and a BA in Law from the Pontificia Universidad Católica de Chile.

In my free time, you may usually find me playing video games, reading a novel, or binge-watching series.

PhD Researcher at Bielefeld University

I am currently part of the research training group GRK 2073 "Integrating Ethics and Epistemology of Scientific Research", funded by the German Research Foundation.

I believe that the development of AI, as a nascent and disruptive technology, will fundamentally change our moral landscape. It is therefore essential that we have a robust meta-ethical grounding in our approach to the ethics of AI. 

My research focuses on the potential for technology to embody moral values, on whether such systems could be moral agents and/or moral patients, and on what this could mean for scientists developing and using them. I am interested in advancing virtue ethics as a promising moral framework for evaluating the effects of these emerging technologies.

In my free time I like to run around things and eat carbohydrates.