[iva] CfPs IEEE VR 2023 Workshop MASSXR: Multi-modal Affective and Social Behavior Analysis and Synthesis in Extended Reality (Submission deadline: January 9)

Yumak, Z. (Zerrin) Z.Yumak at uu.nl
Mon Dec 19 09:58:43 CET 2022


IEEE VR 2023 Workshop on Multi-modal Affective and Social Behavior Analysis and Synthesis in Extended Reality (MASSXR)

Location and date

The IEEE-MASSXR workshop will take place during the 30th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2023), which will be held March 25-29, 2023, in Shanghai, China.

IEEE-MASSXR is a half-day workshop and will be held online on March 25. For more information, please visit the workshop's website:

https://sites.google.com/view/massxrworkshop2023

Description

With recent advances in immersive technologies such as realistic digital humans, off-the-shelf XR devices capable of capturing users' speech, faces, hands, and bodies, and the development of sophisticated data-driven AI algorithms, there is great potential for automatic analysis and synthesis of social and affective cues in XR. Although affective and social signal understanding and synthesis are studied in other fields (e.g., human-robot interaction, intelligent virtual agents, and computer vision), they have not yet been explored adequately in Virtual and Augmented Reality. This demands extended-reality-specific theoretical and methodological foundations. In particular, this workshop focuses on the following research questions:

  *   How can we sense the user's affective and social states using sensors available in XR?
  *   How can we collect users' interaction data in immersive situations?
  *   How can we generate affective and social cues for digital humans/avatars in immersive interactions enabled by dialogue, voice, and non-verbal behaviors?
  *   How can we develop systematic methodologies and techniques for plausible, trustworthy, personalized behaviors for social and affective interaction in XR?

The objective of this workshop on Multi-modal Affective and Social Behavior Analysis and Synthesis in Extended Reality is to bring together researchers and practitioners in social and affective computing with those working on 3D computer vision and computer graphics/animation, and to discuss the current state of the field as well as future directions, opportunities, and challenges. The workshop aims to establish a new platform for the development of immersive embodied intelligence at the intersection of Artificial Intelligence (AI) and Extended Reality (XR). We expect that the workshop will provide an opportunity for researchers to develop new techniques and will lead to new collaborations among the participants.

Scope

This workshop invites researchers to submit original, high-quality research papers related to multi-modal affective and social behavior analysis and synthesis in XR. Relevant topics include, but are not limited to:

  *   Analysis and synthesis of multi-modal social and affective cues in XR
  *   Data-driven expressive character animation (e.g., face, gaze, gestures, ...)
  *   AI algorithms for modeling social interactions with human- and AI-driven virtual humans
  *   Machine learning for dyadic and multi-party interactions
  *   Generating diverse, personalized, and style-based body motions
  *   Music-driven animation (e.g., dance, instrument playing)
  *   Multi-modal data collection and annotation in and for XR (e.g., using VR/AR headsets, microphones, motion capture devices, and 4D scanners)
  *   Efficient and novel machine learning methods (e.g., transfer learning, self-supervised and few-shot learning, generative and graph models)
  *   Subjective and objective analysis of data-driven algorithms for XR
  *   Applications in healthcare, education, and entertainment (e.g., sign language)

Important Dates

  *   Submission deadline: January 9, 2023 (Anywhere on Earth)
  *   Notifications: January 20, 2023
  *   Camera-ready deadline: January 27, 2023
  *   Conference date: March 25-29, 2023
  *   Workshop date: March 25, 2023 (Shanghai time TBD)

Instructions for Submissions
Authors are invited to submit a:

  *   Research paper: 4-6 pages + references
  *   Work-in-progress paper: 2-3 pages + references

Organizers:

  *   Zerrin Yumak (Utrecht University, The Netherlands)
  *   Funda Durupinar (University of Massachusetts Boston, USA)
  *   Oya Celiktutan (King's College London, UK)
  *   Pablo Cesar (CWI and TU Delft, The Netherlands)
  *   Aniket Bera (Purdue University, USA)
  *   Mar Gonzalez-Franco (Google Labs, USA)


--
Dr. Zerrin Yumak | Assistant Professor of Human-Centered Computing,
Founding member of the Departmental Diversity Committee,
Director of the Motion Capture and Virtual Reality Lab |
Utrecht University | Department of Information and Computing
Sciences | Princetonplein 5, 3584 CC Utrecht | Buys Ballot Building room 4.20
| +31 30 253 7578 | z.yumak at uu.nl
| www.zerrinyumak.com | Twitter: @zerrinyumak
