The study of gesture, facial expression, signals and other aspects of verbal and non-verbal language has applications in a multitude of fields. For example, it can contribute to the development of technological and artificial intelligence applications based on human language patterns, to the detection of learning disorders among children, or to the improvement of language learning methodologies and other communication skills in the classroom. These are some of the issues that will be discussed at the 1st International Multimodal Communication Symposium (MMSYM), which will take place in the Auditorium of the UPF Poblenou campus from Wednesday 26 to Friday 28 April.
The organizers of this 1st International Multimodal Communication Symposium are the GrEP (Prosodic Studies Group) of the UPF Department of Translation and Language Sciences and the GEHM (Gestures and Head Movements in Language) network of the University of Copenhagen, which brings together eight European research groups working on the subject.
Pilar Prieto, coordinator of the GrEP (ICREA-UPF), says: "We believe that a multimodal perspective on human language, one that fully integrates voice and body, is crucial to appreciating the great importance these components have for communication and learning".
Patrizia Paggio, coordinator of the GEHM at the University of Copenhagen, adds: "The First International Multimodal Communication Symposium MMSYM 2023 is an important multidisciplinary forum for researchers working in the field of multimodality, that is, the way in which the spoken and visual modalities of human communication are used, both between humans and in human-machine interaction. This area of research is growing in popularity and has applications in many areas, such as the development of communication skills, language teaching, tutoring systems and artificial agents".
More than 130 researchers participating in the symposium, including internationally renowned speakers
More than 130 researchers from all over the world have registered for this international symposium, to which internationally renowned speakers have also been invited. One of the guest speakers is Catherine Pelachaud, research director at the French National Centre for Scientific Research (CNRS), attached to the Institute of Intelligent Systems and Robotics (ISIR) on the Pierre and Marie Curie campus of Sorbonne University. Pelachaud specializes in computer science, human-machine interaction and the development of virtual assistants and facial recognition technologies, among other areas. Other invited speakers are Alan Cienki, a professor of Linguistics and Cognition and of English Linguistics at Vrije Universiteit Amsterdam (VU Amsterdam) and coordinator of the Amsterdam Gesture Center, and Jelena Krivokapic, a professor of Linguistics at the University of Michigan. Their fields of research include prosody, gesture and planning in language production.

The thematic threads of the symposium: fields of research and the application of multimodal communication
The MMSYM is an interdisciplinary symposium targeting researchers and experts in different aspects of multimodality in human communication and human-machine interaction. Multimodal communication is understood as communication encompassing the different means we use to transmit or receive messages and relate to each other, whether verbally or non-verbally.

At the symposium, research along three major lines will be presented. First, research will be shared on the specific characteristics of language when gesture and speech interact, according to the language profile of each interlocutor. Secondly, research will be presented on how to improve the effectiveness of speech through multimodal communication, combining body movement with prosody (or intonation). Thirdly, research will be presented on conceptual and statistical models for analysing data related to multimodal communication, with special attention to head movements and the use of the gaze.
Within the framework of these three thematic threads, topics such as the following will be addressed:
- Multimodal communication in human-machine interaction, for example through conversational agents.
- Artificial intelligence and machine learning systems that can be applied to analyse multimodal communication data.
- Systems for the automatic recognition and interpretation of different modes of communication (for example, facial recognition).
- The relationship between multimodal communication and language acquisition in children, and the strategies that can be followed to improve their learning, detect developmental disorders, and prevent or treat them.
- Multimodal communication systems for people who use sign languages.
- The application of multimodal communication in the health sector.
- The intercultural aspects of multimodal communication.