<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
span.EmailStyle17
{mso-style-type:personal-compose;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="#0563C1" vlink="#954F72">
<div class="WordSection1">
<p class="MsoNormal">* 2<sup>nd</sup> Call for Papers *<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild<o:p></o:p></p>
<p class="MsoNormal"><a href="https://sites.google.com/view/modeling-multimodal-data/home">https://sites.google.com/view/modeling-multimodal-data/home</a><o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">To be held remotely as part of the 22nd ACM International Conference on Multimodal Interaction (ICMI)<o:p></o:p></p>
<p class="MsoNormal"><a href="https://icmi.acm.org/2020/">https://icmi.acm.org/2020/</a><o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Submission deadline: 17 August 2020 <o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><span style="font-size:12.0pt;color:black">===========================================================</span><o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">This workshop is part of the 22nd ACM International Conference on Multimodal Interaction (ICMI) to be held remotely due to COVID-19.<o:p></o:p></p>
<p class="MsoNormal">Machine Learning (ML) on sensor data of human users in the wild poses numerous technical, theoretical, and ethical challenges. Modeling of intelligent adaptive systems in human-computer or human-robot interaction (HCI, HRI) can be based
on a broad spectrum of multimodal signals. For example, eye gaze or EEG may reveal cues about a person’s focus of attention, and multimodal data from posture, voice, or facial expressions are frequently examined to predict user engagement, affective responses,
or impending breakdown of the interaction. In short, multimodal signal processing is essential for the design of more intelligent, adaptive, or even empathic applications and systems in the wild. However, important issues remain largely unresolved: Starting
from low-level processing and integration of noisy data streams, over theoretical pitfalls (e.g., concerning the role of context), up to increasingly pressing ethical questions about what artificial systems and ML can and should do. In this workshop, we will
provide a forum for discussion of the state-of-the-art in modeling user states from multimodal signals in the wild.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">---------------------------------------------------------------------------------------------------------<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Invited Speaker:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Tony Belpaeme, Ghent University, Belgium<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">---------------------------------------------------------------------------------------------------------<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">We welcome the presentation of evaluation studies, field research, theoretical and interdisciplinary contributions, novel modeling approaches, and insights on the ethical design of intelligent adaptive systems in the wild, including:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">* Multimodal modeling of emotions and social action tendencies<o:p></o:p></p>
<p class="MsoNormal">* Multimodal engagement, attention, stress, memory, and workload estimation<o:p></o:p></p>
<p class="MsoNormal">* Modeling and response estimation with biological signals (e.g., EEG, EDA, EMG, HR)<o:p></o:p></p>
<p class="MsoNormal">* Eye tracking and shared attention<o:p></o:p></p>
<p class="MsoNormal">* Studies bridging multimodal research between the laboratory and the wild<o:p></o:p></p>
<p class="MsoNormal">* Case studies involving rich and dynamic signal modeling<o:p></o:p></p>
<p class="MsoNormal">* Cognition-adaptive human-computer interfaces<o:p></o:p></p>
<p class="MsoNormal">* Experiment design for cognitive processes<o:p></o:p></p>
<p class="MsoNormal">* Ethical perspectives on current Human-Computer Interaction (HCI), Human-Robot Interaction (HRI) and applications of Artificial Intelligence (AI)<o:p></o:p></p>
<p class="MsoNormal">* Interdisciplinary collaborations to understand the underpinnings of multimodal data<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">---------------------------------------------------------------------------------------------------------<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Submission instructions:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Submissions of short (4 pages) and long (8 pages) should follow the ACM conference proceedings format. Links to the templates are available on the ACM website (https://www.acm.org/publications/proceedings-template). Each paper will be sent
to at least two expert reviewers and will have one of the organizers assigned as editor. Review will be double-blind. Papers should be submitted via the Microsoft Conference Management Toolkit (CMT; https://cmt3.research.microsoft.com/MSECP2020). Workshop
proceedings will be separate from the main conference proceedings.<o:p></o:p></p>
<p class="MsoNormal">**Workshop papers will be indexed by ACM.**<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">---------------------------------------------------------------------------------------------------------<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Important dates:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Submission deadline: *17 August 2020*<o:p></o:p></p>
<p class="MsoNormal">Notifications of acceptance: 7 September 2020<o:p></o:p></p>
<p class="MsoNormal">Camera-ready versions 30 September 2020<o:p></o:p></p>
<p class="MsoNormal">Workshop date: TBD (either 25 or 29 October 2020)<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">---------------------------------------------------------------------------------------------------------<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Organizing Committee:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Dennis Küster, University of Bremen, Germany<o:p></o:p></p>
<p class="MsoNormal">Felix Putze, University of Bremen, Germany<o:p></o:p></p>
<p class="MsoNormal">Patrícia Alves-Oliveira, University of Washington, USA<o:p></o:p></p>
<p class="MsoNormal">Maike Paetzel, Uppsala University, Sweden<o:p></o:p></p>
<p class="MsoNormal">Tanja Schultz, University of Bremen, Germany<o:p></o:p></p>
</div>
<br>
<br>
When you contact us at Uppsala University by e-mail, we will process your personal data. To read more about how we do this, please see: http://www.uu.se/om-uu/dataskydd-personuppgifter/
<br>
<br>
E-mailing Uppsala University means that we will process your personal data. For more information on how this is performed, please read here: http://www.uu.se/en/about-uu/data-protection-policy
</body>
</html>