
"Interacting with a Social Robot Affects Visual Perception of Space", the paper presented at the HRI 2020 ACM/IEEE International Conference on Human-Robot Interaction, was awarded an Honorable Mention.

 Human partners are very effective at coordinating in space and time. Such ability is particularly remarkable considering that visual perception of space is a complex inferential process, which is affected by individual prior experience (e.g., the history of previous stimuli). As a result, two partners might perceive the same stimulus differently. Yet, they find a way to align their perceptions, as demonstrated by the high degree of coordination observed in sports or even in everyday gestures such as shaking hands. Robots would need a similar ability to align with their partner's perception. However, to date there is no knowledge of how the inferential mechanism supporting visual perception operates during social interaction. In the current work, we use a humanoid robot to address this question. We replicated a standard protocol for the quantification of perceptual inference in an HRI setting. Participants estimated the length of a set of segments presented by the humanoid robot iCub. In one condition the robot behaved as a mechanical arm driven by a computer, and in another as an interactive, social partner. Even though the stimuli presented were the same in the two conditions, length perception differed when the robot was judged as an interactive agent rather than a mechanical tool. When playing with the social robot, participants relied significantly less on stimulus history. This result suggests that the brain changes optimization strategies during interaction and lays the foundations for designing human-aware robot visual perception.


The ideas behind the project have so far been presented in the following talks:

  • Keynote at the AIR project kick-off meeting - Skövde, Sweden, 19-20 March 2019. Title: “Robots & Humans: from mutual understanding to interaction and trust”.

  • Keynote at the Cognitive Science Arena 2019 - Bressanone, Italy, 15-16 February 2019. Title: “Future challenges in cognitive science: The new frontier of human robot interaction”.

  • Invited talk at The 1st International Symposium on Symbiotic Intelligent Systems (SISReC) - Osaka, Japan, 25 January 2019. Title: “Intuitive understanding between humans and robots”.

  • Invited talk at the series of events "Disegni, Invenzioni e Macchine" ("Drawings, Inventions and Machines") at Palazzo Ducale, Genoa, Italy, 16 January 2019. Title: "Robots that help humans… to understand themselves" ("Robot che aiutano… a comprendere l'uomo", in Italian).

  • Presentation "wHiSPER - Investigating Human Shared Perception with Robots" to the ISCIT (Istituto Superiore di Studi in Tecnologie dell'Informazione e della Comunicazione) students, Genoa, Italy, December 7, 2018.

  • Invited talk at the Festival Dell’Eccellenza al Femminile – Genoa, Italy, 20 November 2018. Title: “Communication in the robot era” (“Comunicazione al tempo dei robot”, in Italian).

  • Invited talk at the SICSA Workshop on Cyber Physical Systems – Edinburgh, UK, 16 November 2018. Title: “Mutual understanding for better human-robot interaction”.

  • Keynote at the World Usability Day – Turin, Italy, 8 November 2018. Title: “More Humane Robots” (“Robot più umani”, in Italian).

  • Invited Talk at Skövde University, Skövde, Sweden - October 23, 2018. Title: “Action and Perception in HRI”.

  • Keynote at the workshop BODIS: The utility of body, interaction and self learning at IROS 2018 - Madrid, Spain, October 2018.


  • Book Chapter

    Di Cesare G., The importance of the affective component of movement in action understanding.
    In: Noceti N., Sciutti A., Rea F. (eds.), Modelling Human Motion, Springer, Berlin Heidelberg - New York, 2020

  • Conference paper

    2020 Belgiovine G., Rea F., Zenzeri J., Sciutti A., A Humanoid Social Agent Embodying Physical Assistance Enhances Motor Training Experience.
    RO-MAN 2020, IEEE International Conference on Robot and Human Interactive Communication.

    Skilled motor behavior is critical in many human daily life activities and professions. The design of robots that can effectively teach motor skills is an important challenge in the robotics field. In particular, it is important to understand whether involving in the training a robot exhibiting social behaviors affects the learning and the experience of the human pupils. In this study, we addressed this question by asking participants to learn a complex task - stabilizing an inverted pendulum - while training with physical assistance provided by a robotic manipulandum, the Wristbot.

  • Workshop Paper

    2020 Barros P., Sciutti A., Bloem A. C., Hootsmans I. M., Opheij L. M., Toebosch R. H. A., Barakova E., It’s Food Fight! Introducing the Chef’s Hat Card Game for Affective-Aware HRI.
    Workshop on Exploring Creative Content in Social Robotics at the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2020).

    Emotional expressions and their changes during an interaction heavily affect how we perceive and behave towards other persons. Designing an HRI scenario that makes it possible to observe, understand, and model affective interactions and to generate the appropriate responses or initiations of a robot is a very challenging task. In this paper, we report our efforts in designing such a scenario and propose a modeling strategy of affective interaction by artificial intelligence deployed in autonomous robots.

  • Conference paper

    2020 Mazzola C., Aroyo A. M., Rea F., Sciutti A., Interacting with a Social Robot Affects Visual Perception of Space.
    HRI 2020, ACM/IEEE International Conference on Human-Robot Interaction.

    Visual perception of space is a complex inferential process, which is affected by individual prior experience. Nevertheless, human partners are very effective at coordinating in space and time with each other. Robots would need a similar ability to align with their partner's perception. A humanoid robot was used to investigate how the inferential mechanism supporting visual perception operates during social interaction, and it was found that interacting with a social agent modifies the perception of space and changes optimization strategies.

  • Workshop Paper

    2019 Gonzalez Billandon J., Grasse L., Sciutti A., Tata M., Rea F., Cognitive Architecture for Joint Attentional Learning of word-object mapping with a Humanoid Robot.
    Workshop on Deep Probabilistic Generative Models for Cognitive Architecture in Robotics, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019, Macao, China, November 8, 2019

    This work proposes a cognitive architecture supporting the establishment of a joint understanding of the world between the human and the robot, through the learning of object-word mappings during interaction.

  • Conference Paper

    2019 Barros P., Wermter S., Sciutti A., Towards Learning How to Properly Play UNO with the iCub Robot.
    The 9th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EPIROB 2019), Oslo, Norway

    Establishing shared understanding with others entails being able to comprehend and adapt to their status both at the individual and at the group level. In this short position paper we propose to develop a hybrid neural framework for learning proper affective responses with an iCub robot in a competitive group game scenario.

  • Conference Paper

    2020 Barros P., Tanevska A., Sciutti A., Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game. (Accepted)
    ICPR International Conference on Pattern Recognition

    Learning how to adapt to complex and dynamic environments is one of the most important factors that contribute to our intelligence. Endowing artificial agents with this ability is not a simple task, particularly in competitive scenarios. In this paper, we present a broad study on how popular reinforcement learning algorithms can be adapted and implemented to learn and to play a real-world implementation of a competitive multiplayer card game.

  • Workshop Paper

    2020 Sciutti A., Sandini G., HCI - Human Centered Interaction
    SIGCHI Italy (ACM SIGCHI Italian Chapter)

    Future AI specialists should ideally be confronted from the start of their careers with the most critical challenge in developing technology designed for humans: the humans themselves. The complexity of human cognition as a whole, and of its social component in particular, needs to be accounted for if the developer aims at creating AI systems able to help and engage users effectively for more than a few days. In our proposal we suggest five topics that, in our view, should be part of any syllabus used for teaching HCI skills to designers of AI interactive systems - and potentially also of those adopted in the context of HRI. This would help developers create technology considerate of humans, respectful of our needs, and intrinsically caring of our wellbeing in the interaction.

  • Workshop Paper

    2020 Vannucci F., Di Cesare G., Rea F., Sandini G., Sciutti A., Expressive handovers: neural and behavioral effects of different attitudes in humanoid actions
    IEEE International Conference on Robotics and Automation

    Interacting with others requires the ability to evaluate their attitudes based on how their actions are performed. Even a simple everyday act such as handing over an object acquires a different meaning if it is performed gently or harshly. Concerning human-robot interaction, it is important to be aware of the impact that robot motion features might have on the partner's interpretation of the robot's actions. The challenge we address in this research is to endow the iCub humanoid robot with the capacity to intuitively communicate positive and negative attitudes through its own actions.

  • Conference Paper

    2019 Tanevska A., Rea F., Sandini G., Cañamero L., Sciutti A., Eager to Learn vs. Quick to Complain? How a socially adaptive robot architecture performs with different robot personalities.
    IEEE International Conference on Systems, Man, and Cybernetics

    Robots that are aware of our needs (perceptual, motor or social) and adapt to them have the potential of creating a personalized and effective human-like interaction. In this paper we explore how the very same adaptive architecture is affected by different sets of parameters, i.e., by implementing the same adaptive framework with different robot personalities.

  • Conference Paper

    2019 Goyal G., Noceti N., Sciutti A., Odone F., The role of ego vision in view-invariant action recognition.
    The 4th International Workshop on Egocentric Perception, Interaction and Computing at CVPR 2019, Long Beach, CA

    A crucial step for machines to establish shared perception with their human partners is to understand how to map action understanding across different visual perspectives. In this extended abstract we explore affinities and differences between ego and allo (or third-person) vision and we leverage transfer learning in Convolutional Neural Networks to demonstrate capabilities and limitations of an implicitly learnt view-invariant representation for action recognition.

ERC Starting Grant
Principal Investigator: Alessandra Sciutti
Period: 3/2019 - 2/2024. G.A.: 804388

The content of this website is the sole responsibility of the authors. The European Commission or its services cannot be held responsible for any use that may be made of the information it contains.