<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p align="center"><b><font size="4">PhD Position<br>
</font></b> </p>
<div style="text-align:center"><i><font size="4">
<p class="MsoNormal"
style="margin-bottom:0cm;text-align:center;line-height:normal"
align="center"><i><span
style="font-family:Arial,sans-serif"
lang="EN-US">Learning and Assessment of Public Speaking
in Virtual Reality</span></i></p>
</font></i></div>
<div style="text-align:center"><i><font size="4"><br>
<b><font color="#ff0000">Deadline for application: 8 May 2020</font></b><br>
</font></i></div>
<div style="text-align:center">
<p class="MsoNormal"
style="margin-bottom:0cm;text-align:center;line-height:normal"
align="center"><i><span>Laboratoire d’Informatique et des
Systèmes (LIS) and Institut des Sciences du Mouvement
(ISM)</span></i></p>
Aix-Marseille Université & CNRS</div>
<div> <br>
</div>
<font face="Arial,sans-serif"><b><u>Keywords: </u></b>Human-machine
interaction, Behavioural data collection, Virtual Reality,
Embodied Conversational Agent, Artificial intelligence, Machine
Learning<br>
<br>
</font>
<p><font face="Arial,sans-serif">In the field of e-education and
training, a growing interest has emerged around "training by
simulation" based on virtual environments. Several studies have
been conducted on devices to simulate social interaction with
Embodied Conversational Agents (ECAs) to train social
skills ("Virtual Agents for Social Skills Training", Bruijnes et
al., 2019; Ochs et al., 2019). Some of this research has explored
the use of a virtual audience composed of a set of ECAs to train
individuals to speak in public (Chollet et al., 2015; Pertaub et
al., 2002; North et al., 1998).</font></p>
<p><font face="Arial,sans-serif">One of the challenges in the
development of <i>virtual environments for social skills
training</i> is to automatically evaluate the
user's performance and engagement in the task. In the context of
public speaking, certain objective verbal and non-verbal cues
may reflect the quality of public speaking; for example, the
number of disfluencies in speech, the gestures and the gaze
direction. User engagement, and more specifically, in virtual
reality, the user's sense of presence and co-presence, is
also reflected in objective verbal and non-verbal cues
(Ochs et al., 2019). The identification of behavioural cues of
the quality of human-machine interaction is a major research
issue. The objective of this thesis project is to explore
objective behavioural and physiological cues of the quality of
interaction in the context of the specific use case of public
speaking in front of a virtual audience in a virtual reality
(VR) environment. This is a major scientific challenge, based on
work in both human movement science and computer science. The
proposed methodology will be based on an interdisciplinary
analysis of the objective cues of human-machine interaction in
this situation, including language analysis (e.g. prosody,
lexicon, syntax), analysis of gestures and postures (e.g.
biomechanical measurement, facial expression) and physiological
cues (e.g. heart rate, electrodermal
conductance). These interaction data will be analysed using
machine learning methods to extract behavioural and
physiological cues characteristic of the public speaking
quality. <br>
<br>
</font></p>
<p><font face="Arial,sans-serif">The thesis will be organized around
4 main steps: 1/ Collection and semi-automatic annotation of a
corpus of human-virtual audience interaction in a VR environment;
2/ Identification of social skills from the use case studied; 3/
Identification of the behavioural and physiological cues of
social skills using machine learning techniques; and 4/
Development of an automatic evaluation system of social skills
adapted to the use case.<br>
<br>
</font></p>
<p><font face="Arial,sans-serif">This interdisciplinary thesis will
be carried out in collaboration with <b>two research
laboratories of Aix Marseille University</b>: the <a
href="https://ism.univ-amu.fr/fr">ISM</a>
and the <a
href="https://www.lis-lab.fr/">LIS</a>. The simulation
technology platform developed at the <a
href="https://ism.univ-amu.fr/fr/crvm">Centre de Réalité
Virtuelle Méditerranée</a> (CRVM) of the ISM for public
speaking training in front of a virtual audience will be used.
The evaluation will be carried out in collaboration with experts
in the field (public speaking coaches) to compare the output of
the model with their expertise.<br>
<br>
</font></p>
<font face="Arial,sans-serif"><span></span><b>Required skills</b><br>
The candidate must have a background in Computer Science,
Movement Sciences, or Cognitive Sciences. <br>
Skills in machine learning, experimental methods and statistical
analysis are expected.<br>
Knowledge of Natural Language Processing (NLP), signal processing,
motion capture, Python and R would be appreciated.<br>
Technical knowledge of programming (Java, C++, C#) and the Unity
engine would be a plus. <br>
</font>
<p><font face="Arial,sans-serif">A multidisciplinary background is a
must. <br>
</font></p>
<p><font face="Arial,sans-serif">The thesis is fully funded by a
3-year doctoral contract (salary between 1400 and 1600 €/month)
in the context of the Inter-Ed Interdisciplinary call of Aix
Marseille University. </font></p>
<font face="Arial,sans-serif">The laboratories provide funding for
professional travel, the candidate's training and participation in
international conferences. <br>
<br>
Knowledge of French is not required.<br>
</font><br>
<font face="Arial,sans-serif">Aix Marseille University (<a
class="moz-txt-link-freetext" href="http://www.univ-amu.fr/en">http://www.univ-amu.fr/en</a>),
the largest French university, is ideally located on the
Mediterranean coast, and only 1h30 away from the Alps.<br>
<br>
The application file consists of the following documents: <br>
</font>
<ul>
<li><font face="Arial,sans-serif">A detailed curriculum vitae,</font></li>
<li><font face="Arial,sans-serif">A motivation letter,</font></li>
<li><font face="Arial,sans-serif">A description of the academic
background and copy of academic records and most recent
diploma,</font></li>
<li><font face="Arial,sans-serif">A letter of recommendation from
the internship supervisor or the director of the master's
thesis.</font></li>
<li><font face="Arial,sans-serif">The Master 1 and/or Master 2 thesis.</font><br>
</li>
</ul>
<font face="Arial,sans-serif"><br>
<b>The application file should be sent before 8 May 2020
to:</b><br>
<i>Magalie Ochs</i>: <a class="moz-txt-link-abbreviated"
href="mailto:magalie.ochs@lis-lab.fr">magalie.ochs@lis-lab.fr</a><br>
<i>Rémy Casanova:</i> <a class="moz-txt-link-abbreviated"
href="mailto:remy.casanova@univ-amu.fr">remy.casanova@univ-amu.fr</a><br>
<br>
For any questions, please contact us by email. <br>
<br>
<br>
</font>
<p><br>
</p>
</body>
</html>