Work in Progress Sessions

WIP sessions are held upon request for people to present their ideas and get feedback. Keep in mind that this does not have to be a finished or structured paper: you can present a sketch of an idea for a chapter or a concept you are struggling with!

IF YOU'D LIKE TO GIVE A PRESENTATION*, please fill out this form!

*We ask that presenters also try to attend as many sessions as they can, to promote collaboration and consistency in the feedback.

Upcoming/Previous WIP sessions:

 

Session 6: November 10, Wednesday (17:30 - 19:00 CET)

Presenter: Ori Freiman (University of Toronto)

"Ethical and Epistemological Roles of AIs in Collective Epistemology"

Abstract: Collective epistemology is the branch of social epistemology that deals with the epistemic states of groups, collectives, and corporate agents. Despite the central role AI technologies play in the generation and transmission of knowledge, analytic philosophical theory has largely overlooked the role of technologies in general, and AIs in particular, in collective epistemic phenomena such as group knowledge and group belief. First, I argue that this is due to an anthropocentric assumption - that only humans can be considered members of an epistemic group. I identify this assumption in the main debates within collective epistemology and show that all sides in these debates hold it. Second, I argue vigorously against the anthropocentric assumption, since it prevents the inclusion of technological artifacts in groups despite their influence on the epistemic and ethical outcomes of the group. Third, I rethink the conditions of membership in a collective and suggest a compelling alternative: that membership of an epistemic group is, in principle, open to anyone - and, moreover, anything, such as non-human epistemic agents (i.e., technologies) - that shapes the epistemological and ethical outcomes of a group. Fourth and last, I use the suggested alternative to propose an account that assigns epistemological and ethical responsibility both to a hybrid collective as a group and, within the hybrid collective, to its individual members.

 
Session 5: October 20, Wednesday (17:30 - 19:00 CEST)

Presenter: Dilara Boga (Central European University, Vienna)

"Artificial Moral Agency: Why machines need social emotions"

Abstract:

In recent years, the discussion of the criteria of moral agency for robots has become one of the significant issues for the future of artificial intelligences (AIs). Although there is no consensus on which properties are sufficient to make robots moral agents, many agree that robots need certain properties to be proper moral agents (such as consciousness, autonomy, sentience, etc.). Interestingly, however, there is no substantial account claiming that what robots lack, with respect to morality, is emotions. Here, I will claim that robots lack, in particular, social emotions. Based on the theory of the evolution of cooperation, I will claim that sociality is fundamental to morality. In the process of group formation and social communication, many cultures (if not all) consider cooperative behaviours ‘morally good’, ‘ethical’, or ‘good for humankind’, because cooperative actions resolve conflicts or problems. The moral agent in a collective has a moral choice to cooperate or compete, based on the tension between her self-interest and the group interest. Social emotions play a role in the moral choice of the moral agent when this tension occurs. To be able to have social interactions with humans as ‘moral equals’ and to make moral decisions, robots need to have social emotions; otherwise, they are moral ‘tools’ rather than moral ‘agents’.

 

Session 4: September 29, Wednesday (17:30 - 19:00 CEST)

Presenter: Zach Gudmunsen (Interdisciplinary Ethics Applied Centre, University of Leeds)

“Missing Ingredients in Artificial Moral Agency”

Abstract:

Most people think that artificial systems aren’t ‘full’ moral agents in the way humans are. If that is so, we ought to be able to isolate what artificial systems would need in order to be ‘full’ moral agents. I consider some recently proposed candidates for that ‘missing ingredient’: consciousness, life history, and rationality. I argue that none is particularly convincing and propose ‘autonomy’ as an improved candidate. Autonomy has a messy history, but, with some clarifications, it seems to be a good approximation of what artificial systems need to be ‘full’ moral agents.

 

Session 3: Wednesday 21st of July 2021 - 17:00-18:30 CEST

Presenter: Ann-Katrien Oimann - KU Leuven

"Lethal autonomous weapon systems and the responsibility gap: why command responsibility is (not) a solution"

Abstract:

Modern weapons development constantly seeks ways to inflict maximum damage on targets while minimising the risk to the operator. As a result, there has been a rise in the use of semi-autonomous systems and in research into fully autonomous systems. In recent years, attention has been paid, both in the legal sphere and in philosophy, to the difficulty of allocating responsibility and liability for errors made by lethal autonomous weapon systems (LAWS). Some authors even argue that the increasing level of autonomy in weapon systems will lead to so-called 'responsibility gaps'. Very different solutions have been devised to close this gap. One solution that is gaining popularity, and is being discussed by both philosophers and legal scholars, is the doctrine of command responsibility. The aim of this paper is to contribute to the ongoing debate on attributing responsibility for serious violations of international humanitarian law (IHL) by LAWS by examining whether the doctrine of command responsibility could offer a solution. I will argue that the requirement of a superior-subordinate relationship will be the decisive factor for the success of the analogous application of the doctrine of command responsibility to LAWS.

 
Session 2: Wednesday 30th of June 2021 - 17:00-18:30 CEST

Presenter: Fabio Tollon - Bielefeld University

"From Responsibility Gaps to Responsibility Maps"

Abstract:

When it comes to socially and politically important issues, it is crucial that we hold the guilty parties responsible for the harm they inflict. However, it is also essential that the means by which they are held responsible is fair. In the case of manufacturers or engineers, it is normally thought that holding them responsible should meet certain conditions of knowledge (foresight), control, and intention. Conversely, harmed parties should feel that justice has been done and that those responsible for the harm have been held to account, sanctioned, fined, etc. However, some aspects of AI systems might call into question whether it is indeed fair to hold engineers or manufacturers responsible in this way. Some claim that AI will complicate our ascriptions of moral responsibility, creating a so-called “responsibility gap”. In this paper I will argue that there are no backward-looking gaps in responsibility due to AI. The senses of responsibility I discuss are responsibility as liability, attributability, answerability, and accountability. I go through each of these four senses of responsibility and show, in each case, that while AI does indeed complicate our ability to hold agents morally responsible, it does not undermine our ability to do so.


Session 1: Wednesday 9th of June 2021 - 17:00-18:30 CEST

Presenter: Zachary Daus - University of Vienna  

"Vulnerability, Trust and Human-Robot Interaction"

Abstract:

Recent attempts to engineer trustworthy robotic systems often conceive of trust in terms of predictability. Accordingly, to trust a robotic system (or a human) is to be able to predict what the robotic system (or human) will do. Design elements of robotic systems that seek to engender trust thus often focus on strategies such as making decision procedures transparent, replicating human movements, and developing trust-building training programs. I argue that all of these design strategies for engendering trust overlook a significant condition for trustworthiness: mutual vulnerability. Humans trust one another not merely as a result of being able to predict the actions of the other, but as a result of being mutually vulnerable to similar risks. Co-workers, for example, trust each other not merely because they can predict each other’s actions, but because both are mutually vulnerable to the consequences of the potential failure of their joint work project. The necessary condition of mutual vulnerability for trustworthy relations poses a significant obstacle to the establishment of trustworthy human-robot interaction. This is because robotic systems lack the affective intelligence that is necessary to be vulnerable. Despite the problems that mutual vulnerability poses for the achievement of trust in human-robot interaction, I will nonetheless propose potential solutions. These center on bringing users and creators of robotic systems into greater interaction, so that users can recognize the vulnerability of the creators and how this vulnerability is tied to the success (or failure) of the robotic systems they are using.

Keywords: vulnerability, trust, human-robot interaction.