Dr. Radu Uszkai, Anda Zahiu (University of Bucharest, Romania): You’ll never work alone: AI, robots, and the future of meaningful coaching in football

In 2020, Liverpool won their first Premier League title in 30 years, after being crowned kings of Europe just the previous season. While sports pundits worldwide credited their success to a great generation of players led by the likes of Virgil van Dijk and Mohamed Salah, they were also united in pointing out the pivotal role that Jürgen Klopp played in reshaping Liverpool's philosophy of playing football. With Liverpool partnering with tech companies like Acronis and SkillCorner, Klopp is now at the forefront of the future of football coaching, one which seems to rely heavily on data analytics and Artificial Intelligence. In light of these new developments, Serguei Beloussov, the CEO of Acronis, has even said that AI-powered robots might make human coaches obsolete.

Prof. Gregory M. Reichberg (PRIO - Peace Research Institute Oslo, Norway): AI Applications in the Military Domain: Ethical Opportunities and Risks

AI-based technologies are assuming ever greater importance for militaries around the globe. Governments are investing heavily in these technologies, with the US and China leading the way. After explaining why military planners see such promise in AI, I briefly review the main types of technologies. Ethical reflection in this domain has been heavily focused on systems that are designed to replace human decision-makers in battlefield settings. Debates about Lethal Autonomous Weapon Systems (LAWS) – colloquially called “killer robots” – have grown heated and numerous voices have called for an international ban on their development and use. After reviewing the ethical arguments for and against the deployment of such weapon systems, I consider other AI military applications that have received far less attention, namely systems that are intended to augment human decision-making in battlefield settings. These systems likewise raise ethical challenges, which I discuss in the final part of my presentation. Reliance on AI for the making of life-and-death decisions raises significant issues that should not be ignored, and the attendant risks need to be better understood.

Dr. John Michael (BPP University, London, UK): The Sense of Commitment in Joint Action: Perspectives from Research on Human-Robot Interaction

In this talk I spell out the rationale for developing means of manipulating and measuring people's sense of commitment to robot interaction partners. A sense of commitment may lead people to be patient when a robot is not working smoothly, to remain vigilant when a robot is working so smoothly that a task becomes boring, and to increase their willingness to invest effort in teaching a robot. Against this background I will present a theoretical framework for research on the sense of commitment in joint action, as well as a set of studies that have been conducted to probe various means of boosting people's sense of commitment in human-human interaction and in joint actions with robot partners. I conclude by discussing the implications of this research for recent philosophical debates about the nature of joint action, and about the role of commitment in joint action.

Dr. Sebastian Krügel (Technical University of Munich, Germany): AI-Powered Moral Advisors

We investigate whether and how AI-powered algorithms can serve as moral advisors. In a series of online studies, we measure participants' propensity to choose between a more and a less ethical alternative in various scenarios (recruitment process, organ donation, criminal prosecution). Before making their decision, participants receive advice from a moral advisor who declares the unethical alternative to be either acceptable or unacceptable. We manipulate whether the advisor is human or an AI-based algorithm, as well as the information about the characteristics and/or functioning of the advisor. Our results show that advice from AI-based advisors is heeded to an extent comparable to advice from human moral advisors. The influence of moral advice appears to be quite robust to the specific configuration of the AI-based advisor. Interestingly, our participants even ignore information that clearly disqualifies the advisor when the advisor is an AI-powered algorithm, but not when the advisor is human.

Dr. Maki Rooksby (University of Glasgow, UK): Proxemic perception during virtual approach by NAO robot

What feels like a comfortable and appropriate space between ourselves and another is a powerful nonverbal element of social interaction. Research on social space, sometimes known as proxemics, suggests that the proximity between conversation partners may be indicative of the nature or quality of the relationship between them. However, what feels like an optimal distance seems to vary somewhat according to participants' cultural backgrounds as well as other demographic characteristics such as gender or height.

Prof. Ruud Hortensius (Utrecht University, Netherlands): How do real interactions with robots shape everyday social cognition?

While films, literature, and art have provided us with a rich depiction of the potential of robots, our understanding of actual interactions with robots remains limited. In this talk, I will explore not only how and when people form relationships with a robot but also how these new interactions shape distinct aspects of social cognition (for example, emotion, empathy, and Theory-of-Mind). Leveraging insights from socialising interventions, behavioural observations, inter-individual differences, and neuroimaging, I will argue that the cognitive reconstruction within the human observer is likely to be far more crucial in shaping our interactions with robots than previously thought.

Dr. Philipp Kellmeyer (Albert-Ludwigs-Universität Freiburg, Germany): KEYNOTE SPEECH Trust in human-AI / human-robot interactions

Highly adaptive AI systems, social robots, closed-loop neurotechnology and other emerging digital technologies enable new forms of highly interactive human-machine interactions or even hybrid co-actions. These emerging kinds of interactions, especially in environments and contexts where safety and privacy are paramount, require an understanding of how we should conceptualize the relationship between humans and adaptive systems. In this talk, Dr Kellmeyer will explore conceptual foundations of trust in human-AI and human-robot interactions, discuss the problem of a "sociomorphic fallacy" in social robotics and propose potential design-based approaches to fostering trust in human-AI/-robot interactions.

Dr. Raul Hakli (University of Helsinki, Finland): Social interaction with robots

My talk is concerned with the possibility of social interaction with robots. A central aim of social robotics and human-robot interaction is to create robots that can be perceived as social agents and that can engage in social interaction with human beings. This seems problematic, however, because the term “sociality” does not seem to be readily applicable to artifacts like robots. Even if we were to allow taking robots as intentional agents, as in functionalism or Dennett’s intentional stance approach, social interaction is typically understood to take place between persons and to involve capacities that arguably are beyond robots. Contrary to the common way of talking within AI and robotics, I argue that robots are not autonomous agents in the philosophical sense relevant to personhood or moral agency, which requires fitness to be held responsible. This arguably implies that, strictly speaking, they are not capable of the kind of social interaction with humans that typically involves social commitments and other normative relations between participants. However, they can still be programmed to behave in ways that resemble cooperative social interaction and joint action. We can hence coordinate our actions with them by attributing to them certain social capacities. This is similar to the Dennettian intentional stance, but goes beyond it, into a social stance, which creates room for taking robots, for instrumental purposes, as social agents and partners in social interaction.

Dr. Niccolò Pescetelli (Max Planck institute for Human Development, Berlin, Germany): The interaction of human and machine biases in hybrid groups

Many modern interactions happen in a digital space, where automated recommendations and homophily can shape the composition of groups interacting together and the knowledge that groups are able to tap into when operating online. In this talk I will present two studies showing evidence that human and algorithmic behaviour interact to produce emergent collective phenomena, such as positive vs. negative group performance. In the first study, group composition and modularity interact with the search engines that people use to gather information online when solving real geopolitical forecasting tasks. In the second study, I use a hybrid transmission chain paradigm to show that social learning between machines and humans under uncertainty is limited, resulting in poor diffusion of innovations across a population.
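To make the transmission chain paradigm concrete, here is a minimal simulation sketch in Python. The estimation task, the agent weights, and the noise levels are illustrative assumptions, not parameters from the studies above; the sketch only shows how an estimate propagates through alternating "human" and "machine" links in a chain.

```python
# Minimal sketch of a hybrid transmission chain. Assumed setup: each
# agent receives the previous agent's estimate of a hidden quantity,
# gathers private noisy evidence, and passes on a weighted combination.
# All parameter values are illustrative, not taken from the studies.
import random

TRUE_VALUE = 10.0  # hidden quantity the chain is estimating

def human_step(received, weight_on_social=0.7, noise=2.0):
    """A 'human' agent: noisy evidence, leans on the received estimate."""
    evidence = TRUE_VALUE + random.gauss(0, noise)
    return weight_on_social * received + (1 - weight_on_social) * evidence

def machine_step(received, weight_on_social=0.2, noise=0.5):
    """A 'machine' agent: precise evidence, discounts social input."""
    evidence = TRUE_VALUE + random.gauss(0, noise)
    return weight_on_social * received + (1 - weight_on_social) * evidence

def run_chain(steps=(human_step, machine_step), length=10, seed_estimate=0.0):
    """Alternate agent types along the chain and record each estimate."""
    estimate, trajectory = seed_estimate, []
    for i in range(length):
        estimate = steps[i % len(steps)](estimate)
        trajectory.append(estimate)
    return trajectory

if __name__ == "__main__":
    for i, est in enumerate(run_chain(), start=1):
        print(f"link {i:2d}: estimate = {est:.2f}")
```

Varying `weight_on_social` for either agent type changes how quickly a seed estimate (an "innovation") survives or washes out along the chain, which is the kind of limited diffusion the paradigm is designed to expose.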

Pantelis Analytis (University of Southern Denmark, Denmark): In vino veritas: Can wine recommender systems be more informative than renowned wine critics?

Critics and recommender systems mostly rival each other: both influence people’s choices of, say, films, music, restaurants, or wines. However, little is known about how the ratings of professional critics and amateurs compare and how they could be combined. To address these questions, we created a new collaborative filtering dataset with ratings of wine labels from both renowned wine critics and regular wine consumers (amateurs), and used it to simulate the performance of a standard collaborative filtering algorithm. We studied how the k-nearest neighbor algorithm (k-nn) performs (both at the individual and the aggregate level) when advice is drawn from critics and/or amateurs. We also formalized and visualized the social network spanned by k-nn by calculating how much a user is consulted by k-nn (potential influence) and how much a user can actually contribute to recommendations, i.e., has rated the target item (actual influence). We find that a system using both professional critics’ and amateurs’ ratings can substantially outperform systems relying on either of these groups alone. And even though there is strong evidence of taste homophily between professional critics and amateurs (i.e., critics should get advice from critics and amateurs from amateurs), critics exert more influence on the actual recommendations because they are more prolific raters. Our results provide a proof of concept for how critics’ and amateurs’ opinions can be harnessed to build robust recommender systems for wines, while our methods can be leveraged more generically to (i) make the recommendation process more transparent, (ii) identify influential users in recommender systems, and (iii) investigate taste homophily in recommender networks and beyond.
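As a rough illustration of the k-nn setup described above, the following Python sketch predicts a user's wine rating from their k most similar raters in a toy dataset mixing critics and amateurs. The data, the inverse-distance similarity measure, and k = 2 are assumptions for illustration, not the paper's actual dataset, metric, or configuration.

```python
# Toy k-nn collaborative filtering over a mixed critic/amateur dataset.
# Both the ratings and the similarity measure are illustrative assumptions.
from math import sqrt

ratings = {
    "critic_1":  {"wine_a": 9, "wine_b": 7, "wine_c": 8},
    "critic_2":  {"wine_a": 8, "wine_b": 6, "wine_c": 9, "wine_d": 7},
    "amateur_1": {"wine_a": 6, "wine_b": 8, "wine_d": 9},
    "amateur_2": {"wine_b": 7, "wine_c": 5, "wine_d": 8},
}

def similarity(u, v):
    """Inverse-distance similarity over co-rated wines (toy choice)."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dist = sqrt(sum((ratings[u][w] - ratings[v][w]) ** 2 for w in common))
    return 1.0 / (1.0 + dist)

def predict(user, wine, k=2):
    """Predict `user`'s rating of `wine` from the k most similar raters."""
    neighbours = sorted(
        (v for v in ratings if v != user),
        key=lambda v: similarity(user, v), reverse=True)[:k]
    # Neighbours who never rated the wine are still 'consulted'
    # (potential influence) but cannot contribute (no actual influence).
    contributors = [v for v in neighbours if wine in ratings[v]]
    if not contributors:
        return None  # no neighbour has rated the target wine
    weights = [similarity(user, v) for v in contributors]
    return sum(w * ratings[v][wine]
               for w, v in zip(weights, contributors)) / sum(weights)

print(predict("amateur_1", "wine_c"))
```

The distinction drawn in the abstract falls out naturally here: a neighbour selected into the top k is consulted (potential influence), but only neighbours who have rated the target wine contribute to the prediction (actual influence), which is why prolific raters such as critics end up carrying more weight.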

Katsumi Watanabe (Waseda University, Tokyo, Japan): Explicit and implicit aspects of human-human and human-machine interactions

How do humans and robots/AIs behave, feel, and interact in complex worlds? While technologies and artifacts are to be understood in their specific natural, social, and cultural contexts, intelligent agents form a particular category in terms of both our expectations toward them and our perception of them. First, they are thought (or expected) to be partly capable of exploring their environments and interacting with other objects, artifacts, and living beings; that is, they have (semi-)agency. Also, they are thought (or expected) to be partly capable of interpreting and producing essential elements of communication, which would be based on, or eventually lead to, experience like that which we feel; that is, they have (semi-)personal experience. Further, with or without agency and personal experience, they are thought (or expected) to serve as social interfaces and mirror images of humans, which somehow drives us to produce robots and AIs that resemble human bodies or emulate characteristics of human appearance and behavior. In this talk, I would like to illustrate that both explicit and implicit aspects are important for understanding human-human and human-machine interactions.

Silvia Milano (University of Oxford, UK): Evaluating recommender systems: from AI personal assistants to social planners

We interact with recommender systems on a regular basis when we use online services and apps. They collect, curate, and act upon vast amounts of data, shaping individual experiences of online environments and social interactions. In this talk, I will argue that a natural consequentialist approach to evaluating recommender systems encounters two problems: first, the actual stakeholders in a recommendation may not correspond to those represented in the system’s internal ontology (the individuation problem); and second, the interests of different categories of stakeholders may be hard to compare from a neutral perspective (the aggregation problem). I consider some strategies for solving these problems, concluding that the appropriate perspective from which we should evaluate recommender systems is that of a so-called social planner.

Ana Tajadura-Jiménez (University College London, UK): The multisensorial body in a technology-mediated world

Body perceptions are important for people’s motor, social, and emotional functioning. Critically, current neuroscientific research has shown that body perceptions are not fixed but are continuously updated through sensorimotor information. However, this research has mostly been conducted in controlled lab settings in which body movement is restricted, which hinders our knowledge of the effects of sensory cues on body perception in everyday functioning. With the emergence of full-body sensing technologies, we are now able to track people’s body movements almost ubiquitously through a variety of low-cost sensors embedded in clothes and wearable accessories, and new possibilities to deliver sensory feedback while people are on the move are also arising. These developments offer new research tools for investigating how multisensory processes shape body perception, and how this affects people’s motor, social, and emotional functioning in everyday contexts, which can then inform real-life applications. In this talk I will present the work from our group on how sound feedback on one’s actions can be used to alter body perception. I will then present three studies from our current project aimed at informing the design of wearable technology in which sound-driven changes in body perception are used to enhance behavioral patterns and emotional state in the context of physical exertion. I will conclude by identifying new opportunities that AI would bring to this line of work.

Pii Telakivi (University of Helsinki, Finland): AI-extenders and moral responsibility

According to the Extended Mind hypothesis, the constitutive basis of certain cognitive capacities is not confined within the boundaries of the body but can extend to include devices and tools. So far, research has focused on “traditional” tools, such as notebooks or pen and paper. However, instead of such “basic extenders”, I will focus on externalisations based on AI technology. Following Hernández-Orallo & Vold (2019), I call them AI-extenders: cognitive extenders that use AI technology and that are tightly coupled with the human agent so that a hybrid system is created. I will examine how an AI-extender might be relevant for moral agency and responsibility by going through examples where AI-extenders either enhance or reduce their user’s moral agency.