[iva] Call for Papers - Journal on Multimodal User Interfaces (JMUI) by Springer Nature

Jean-Claude Martin jean-claude.martin at universite-paris-saclay.fr
Sun Oct 5 17:44:05 CEST 2025


Dear researcher in multimodal interaction,


I am writing to invite you to submit articles to the Journal on Multimodal User Interfaces (JMUI), for which I serve as Editor-in-Chief.
Several examples of articles published in JMUI can be found at the end of this email.

JMUI is published by Springer Nature.
Its 2024 impact factor is 2.1.
It offers a hybrid publication model: articles can be published open access or via the traditional subscription route.
You can find out more about JMUI, including open access articles, on the journal website:
https://link.springer.com/journal/12193.

We welcome submissions related to the following topics:
- Design, modelling and evaluation of multimodal interactions
- Multimodal input processing
- Virtual agents
- Multimodal perception
- Affective computing and the perception and expression of emotions
- Innovative use of non-verbal modalities (e.g. gaze, haptics)
- Multimodal corpora and user studies
- Software architectures for multimodal interactions
- AI and LLMs for multimodal interaction
- Applications of multimodal interaction (e.g. health, education, accessibility, automotive, robotics)
- Ethical dimensions of multimodal interaction

If you are interested in publishing with us, please submit your manuscript through our online submission system.
You can find detailed guidelines and instructions on our journal website: https://link.springer.com/journal/12193.

Please note that JMUI publishes several types of articles, such as review articles, original articles, and short research communications: https://link.springer.com/journal/12193/submission-guidelines#Instructions%20for%20Authors_Types%20of%20Papers.

JMUI also publishes special issues.
These can include extended versions of workshop and conference articles (with at least 30% new material).
If you are interested in guest editing a special issue, please feel free to contact me.

You will find information about some special issues and published articles below.

Best regards,

Jean-Claude Martin
Editor-in-Chief of the Springer Journal on Multimodal User Interfaces (JMUI)
Professor, Université Paris-Saclay, LISN/CNRS, France

-----------------
Some Special Issues hosted by JMUI

Special Issue: Design and Perception of Interactive Sonification
Guest Editors: Tim Ziemer, Sara Lenzi, Niklas Rönnberg, Thomas Hermann, Roberto Bresin
Volume 17, Issue 4, December 2023
https://link.springer.com/journal/12193/volumes-and-issues/17-4

Special Issue: Multimodal Interfaces and Communication Cues for Remote Collaboration
Guest Editors: Seungwon Kim, Kangsoo Kim, Mark Billinghurst
Volume 14, Issue 4, December 2020
https://link.springer.com/journal/12193/volumes-and-issues/14-4

Special Issue: Virtual Agents for Social Skills Training
Guest Editors: Merijn Bruijnes, Jeroen Linssen, Dirk Heylen
Volume 13, Issue 1, March 2019
https://link.springer.com/journal/12193/volumes-and-issues/13-1

-----------------
Some articles published in JMUI

Tanaka, H. & Lisitsyna, A. Web-based multimodal learning system to develop social communication skills. J Multimodal User Interfaces 19, 271–284 (2025). https://doi.org/10.1007/s12193-025-00460-5

Basille, A., Lavoué, É. & Serna, A. Impact of communication modalities on social presence and regulation processes in a collaborative game. J Multimodal User Interfaces 19, 101–118 (2025). https://doi.org/10.1007/s12193-024-00450-z

Park, W., Jamil, M.H. & Eid, M. Vibration feedback reduces perceived difficulty of virtualized fine motor task. J Multimodal User Interfaces 19, 93–99 (2025). https://doi.org/10.1007/s12193-024-00449-6

Brument, H., De Pace, F. & Podkosova, I. Does mixed reality influence joint action? Impact of the mixed reality setup on users' behavior and spatial interaction. J Multimodal User Interfaces (2024). https://doi.org/10.1007/s12193-024-00445-w

Biancardi, B., Mancini, M., Ravenet, B. & Varni, G. Modelling the "transactive memory system" in multimodal multiparty interactions. J Multimodal User Interfaces 18, 103–117 (2024). https://doi.org/10.1007/s12193-023-00426-5

Yoneyama, J., Fujimoto, Y., Okazaki, K., Sawabe, T., Kanbara, M. & Kato, H. Augmented conversations: AR face filters for facilitating comfortable in-person interactions. J Multimodal User Interfaces 19, 57–74 (2025). https://doi.org/10.1007/s12193-024-00446-9

Falcone, S., Kolkmeier, J., Bruijnes, M. & Heylen, D. The multimodal EchoBorg: not as smart as it looks. J Multimodal User Interfaces 16, 293–302 (2022). https://doi.org/10.1007/s12193-022-00389-z

Dondi, P., Porta, M., Donvito, A. & Volpe, G. A gaze-based interactive system to explore artwork imagery. J Multimodal User Interfaces 16, 55–67 (2022). https://doi.org/10.1007/s12193-021-00373-z

Doyran, M., Schimmel, A., Baki, P., Ergin, K., Türkmen, B., Akdağ Salah, A., Bakkes, S.C.J., Kaya, H., Poppe, R. & Salah, A.A. MUMBAI: multi-person, multimodal board game affect and interaction analysis dataset. J Multimodal User Interfaces 15, 373–391 (2021). https://doi.org/10.1007/s12193-021-00364-0

