FEEL-COG: The Role of Affect in the Development of Cognition (2nd edition)
People are social beings, naturally predisposed to interact in an adaptive and empathetic manner with others. From birth, human beings develop their cognitive, affective, and social skills through exposure to and interaction with their social group - family, friends, peers. As we grow and learn, we become capable of registering the cognitive and affective states of our peers, adjusting our actions and speech to their perceived needs, and, over a prolonged period of interaction, also learning which behaviors are most appropriate and best suited to each of them individually.
This collection of abilities - perception, interaction, affect, reasoning, memory, motivation for action, learning, etc. - is what defines us as cognitive agents. Modelling these abilities in a cognitive architecture - either as a way of understanding human cognition or as an attempt to endow an artificial agent with cognitive skills - has been approached by researchers from many fields (computer science, cognitive psychology, robotics, philosophy, neuroscience), and has resulted in an impressive number of architectures. However, while the vast majority of them share some common model of cognition and recognise as crucial the elements of perception, attention, action selection, learning, memory and reasoning, very few of them acknowledge the role that social and affective interaction plays in the development of these skills. Our workshop aims to tackle precisely this issue.
In FEEL-COG’s second edition, we wish to build on and go beyond the discussions raised in last year’s inaugural workshop, and to continue the conversation on the key role of affect in the development of different cognitive abilities, both in natural cognitive agents (human and non-human) and in artificial ones (e.g. cognitive and social robots). We would like to address this issue from an interdisciplinary angle, and we welcome researchers from a broad range of disciplines, such as robotics, neuroscience, the social sciences, psychology and computer science.
The ICDL 2022 conference policy requires everyone attending workshops or tutorials to register, including organizers, authors, and speakers. For more information on workshop registration, refer to the ICDL 2022 registration page.
To ensure accessibility for attendees who cannot travel to London, we will also be streaming the workshop via Zoom. If you would like to register for online attendance, please fill in this form by September 10th.
- 14:00 - 14:10: Welcome by the organizers
- 14:10 - 15:00: Keynote talk - Lorenzo Cominelli (University of Pisa, Italy) - The Influence of Emotions in Humanoid Behavior and for the Development of a Robotic Self
- 15:00 - 15:30: L. Carminatti (University of Genoa and Italian Institute of Technology, Italy) - Embodied Emergent Emotions in Cognitive Robots (contributed paper by L. Carminatti, A. Tanevska, A. Antunes, G. Sandini, V. Tikhanoff, F. Rea)
- 15:30 - 16:00: T. Kastendieck (Humboldt Universität, Germany) - Sensing Others - A Walk in the Park? Emotion Perception, Social Perception, Interpersonal Closeness and Emotional Mimicry in Response to Embedded Avatar Faces (contributed paper by T. Kastendieck, D. Huppertz, H. Mauersberger, U. Hess)
- 16:00 - 16:30: Coffee break
- 16:30 - 17:20: Keynote Talk - Pablo Barros (Sony, Belgium) - Who is Afraid of Non-Universal (Deep Learned) Facial Perception?
- 17:20 - 18:20: Panel discussion with all presenters - Modeling Affect in Cognitive and Developmental Systems: Challenges and the Way Forward
- 18:20 - 18:30: Closing remarks
The Influence of Emotions in Humanoid Behavior and for the Development of a Robotic Self
In this talk, some fundamental principles of human emotional intelligence will first be presented. We will discuss the influence of emotions on human decision-making and on the formation of consciousness, drawing on several neuroscientific findings and theory of mind. Then, we will see how to apply these theories to a cognitive architecture for social robots. We will focus on Antonio Damasio's somatic marker hypothesis and an example of its implementation. We will then discuss the SEAI (Social Emotional Artificial Intelligence) cognitive architecture, to see how it is possible to implement some characteristically human behavior in an expressive humanoid. This system is conceived as a hybrid deliberative/reactive architecture that, based on information gathered from the environment, can perform high-level reasoning on the perceived social and emotional context and make the robot react accordingly. Some experiments conducted with the social robots FACE (Facial Automaton for Conveying Emotions) and Abel will be presented, to finally discuss how we can endow a robot with a specific personality and possibly generate some form of robotic self. The aim is to model human cognition so as to improve the perception of interacting with an autonomous, believable agent; an open discussion on what this could imply for HRI will therefore certainly be triggered.
Holding an MD in Biomedical Engineering (2014) and a PhD in Information Engineering (2018), both from the University of Pisa, Lorenzo Cominelli is a post-doc researcher at the “E. Piaggio” Research Center. His research is focused on AI and cognitive systems for social robots, with particular attention to the influence of emotions on robot decision-making. He has been responsible for the design and development of SEAI (Social Emotional Artificial Intelligence) – the FACE robot's cognitive architecture. Since 2020 he has been working on the development of Abel, a hyper-realistic expressive robot born from the collaboration between the research center and the animatronics creator Gustav Hoegen. Abel will use SEAI and its upgraded versions for applications in medical settings (e.g., therapy for children with neurodevelopmental disorders, elderly people with neurodegenerative diseases) and for studies on cognitive robotics and human-robot interaction.
Who is Afraid of Non-Universal (Deep Learned) Facial Perception?
Facial Expression Recognition (FER) has become a popular topic within the hyper-active computer vision community, which has led to the development of a plethora of FER solutions easily accessible to the general public, in most cases based on deep-learned facial expression representations. Such solutions have become the backbone of human-based interaction research, being used as a means for human behavior analysis, the backbone of interaction-driven models, and one of the most fundamental building blocks of proposed cognitive architectures. Most of this important research relies blindly on the objective performance of FER systems and their capability to categorize a face, in most cases even at frame level, into one known and pre-determined emotional category. Once you actually understand how deep-learned FER models categorize faces, it is easy to see that trusting their outputs might drastically bias all of the previously mentioned research areas. These models are mostly trained on a supervised task, where groups of pixels are pushed to compose a specific, pre-determined emotional category. In most cases, these affective labels are deeply connected to the scenario represented by the datasets these models were trained on, which drastically changes the interpretation of their FER results. In line with recent advances on non-universal facial perception, understanding the context in which these models were trained might help to avoid a strong bias in their application to fundamental research, and help us be more responsible in our claims and findings. The goal of this talk is to discuss the core of the problem of blindly trusting FER systems, and to foster a discussion on the importance of understanding how they function.
In this regard, I will present our most recent research on facial expression perception and how we can address the bias in affective categorization based on the non-universal perception theory, and how this can impact the future use of FER technology in other fields.
Pablo Barros is a machine learning scientist working at the Sony R&D Center in Brussels, focusing on physiological signal processing for mental health solutions. Pablo holds a Ph.D. degree in computer science from the University of Hamburg, Germany, and over the past years has worked on different projects involving social robotics and affective computing. His research focuses on social perception, in particular facial expression recognition, as well as on AI-based modelling of the role of social agents when interacting with humans.
Call for contributions
With the goal of sparking new conversations on the topic of affect and cognition, their interrelation, and their role in development in natural and artificial systems, we invite participants to submit their contributions as either an extended abstract (2-4 pages) or as a short position paper (1 page). In addition to papers reporting state-of-the-art research, we encourage authors to submit work in progress, reports with preliminary results, as well as critical reflections and position papers, so as to have the chance to identify any questions and discussion points they may wish to address during the panel.
Submissions should be sent in PDF format to
- Abstract submission deadline (extended): ~~July 25th~~ August 1st, 2022
- Notification of acceptance: August 5th, 2022
- Camera-ready deadline: September 1st, 2022
- Workshop date: September 12th, 2022
Dr. Ana Tanevska is a postdoctoral researcher at the Italian Institute of Technology in Genoa (Italy), within the ERC-funded project wHiSPER, investigating shared perception between humans and robots. Ana obtained their PhD degree in Bioengineering and Robotics from the University of Genoa in March 2020, with the thesis "Towards a Cognitive Architecture for Socially Adaptive Human-Robot Interaction".
Prior to moving to Italy, Ana obtained a B.Sc. degree in Computer Science and Engineering (2015) and a M.Sc. degree in Intelligent Systems Engineering and Robotics (2016), both from FCSE (FINKI) in Skopje, Macedonia.
Ana's main research interests include cognitive robotics, adaptation in human-robot interaction (HRI), and socially-assistive robotics. Their work on socially-adaptive cognitive architectures has been most recently published in Frontiers in Robotics and AI.
Prof. Lola Cañamero is Chair of Robotics and Neuroscience at CY Cergy Paris University, where she is a member of the Neurocybernetics Group in the ETIS Laboratory. She was previously Reader in Adaptive Systems and Head of the Embodied Emotion, Cognition and (Inter-)Action Lab in the Department of Computer Science at the University of Hertfordshire in the UK, which she joined as faculty in 2001.
She holds an undergraduate degree (Licenciatura) in Philosophy from the Complutense University of Madrid and a PhD in Computer Science (Artificial Intelligence) from the University of Paris-XI, France. She turned to Embodied AI and robotics as a postdoctoral fellow in the groups of Rodney Brooks at MIT (USA) and of Luc Steels at the VUB (Belgium). Since 1995, her research has investigated the interactions between motivation, emotion, and embodied cognition and action from the perspectives of adaptation, development and evolution, using autonomous and social robots and artificial life simulations. She has played a pioneering role in nurturing the emotion modeling community, and is author or co-author of over 150 peer-reviewed publications on these topics.