[iva] PhD Position - Learning and Assessment of Public Speaking in Virtual Reality, Aix Marseille University, France

Magalie Ochs magalie.ochs at lis-lab.fr
Wed Apr 15 16:36:45 CEST 2020


*PhD Position*

/Learning and Assessment of Public Speaking in Virtual Reality/

*Deadline for application: 8 May 2020*
/Laboratoire d’Informatique et des Systèmes (LIS) and Institut des
Sciences du Mouvement (ISM)/

Aix-Marseille Université & CNRS

*Keywords:* Human-machine interaction, Behavioural data collection,
Virtual Reality, Embodied Conversational Agent, Artificial Intelligence,
Machine Learning

In the field of e-education and training, growing interest has emerged
in "training by simulation" based on virtual environments. Several
studies have investigated systems that simulate social interaction with
Embodied Conversational Agents (ECAs) to train users' social skills
("Virtual Agents for Social Skills Training", Bruijnes et al., 2019;
Ochs et al., 2019). Some of this work has explored the use of a virtual
audience composed of a set of ECAs to train individuals to speak in
public (Chollet et al., 2015; Pertaub et al., 2002; North et al., 1998).

One of the challenges in developing /virtual environments for social
skills training/ is to automatically evaluate the user's performance and
engagement in the task. In the context of public speaking, certain
objective verbal and non-verbal cues may reflect the quality of the
speech: for example, the number of disfluencies, the gestures, and the
gaze direction. User engagement, and more specifically the user's sense
of presence and co-presence in virtual reality, is also reflected
through objective verbal and non-verbal cues (Ochs et al., 2019).
Identifying behavioural cues of the quality of human-machine interaction
is a major research issue. The objective of this thesis project is to
explore objective behavioural and physiological cues of interaction
quality in the specific use case of public speaking in front of a
virtual audience in a virtual reality (VR) environment. This is a major
scientific challenge, drawing on work in both human movement science and
computer science. The proposed methodology is based on an
interdisciplinary analysis of the objective cues of human-machine
interaction in this situation, including language analysis (e.g.
prosody, lexicon, syntax), analysis of gestures and postures (e.g.
biomechanical measurement, facial expressions) and physiological cues
(e.g. heart rate, electro-dermal conductance). These interaction data
will be analysed using machine learning methods to extract behavioural
and physiological cues characteristic of public speaking quality.
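By way of illustration, the sketch below shows one possible way such
multimodal cues could be related to speaking quality with a standard
classifier. It is only a toy example, assuming scikit-learn, random
placeholder data, and hypothetical feature names (disfluency rate, gaze
dispersion, heart rate, electrodermal level); it is not the project's
actual pipeline.

# Hypothetical sketch: predicting public-speaking quality from multimodal cues.
# Features and labels are illustrative placeholders, not the project's corpus.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per recorded talk: [disfluency rate, gaze dispersion,
#                             mean heart rate, electrodermal level]
X = rng.normal(size=(40, 4))
# Expert-assigned quality label per talk (0 = weak, 1 = good), random here.
y = rng.integers(0, 2, size=40)

# A simple classifier trained on the multimodal features; cross-validation
# gives a first estimate of how well the cues predict expert judgements.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")

# Feature importances indicate which behavioural/physiological cues
# contribute most to the predicted quality.
clf.fit(X, y)
print(dict(zip(["disfluency", "gaze", "heart_rate", "eda"],
               clf.feature_importances_)))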

The thesis will be organized around four main steps: 1/ collection and
semi-automatic annotation of a corpus of human-virtual audience
interaction in a VR environment; 2/ identification of the social skills
involved in the use case studied; 3/ identification of the behavioural
and physiological cues of these social skills using machine learning
techniques; and 4/ development of an automatic evaluation system of
social skills adapted to the use case.

This interdisciplinary thesis will be carried out in collaboration with
*two research laboratories of Aix Marseille University*: the ISM
<https://ism.univ-amu.fr/fr> and the LIS <https://www.lis-lab.fr/>. The
simulation platform for public speaking training in front of a virtual
audience, developed at the Centre de Réalité Virtuelle Méditerranée
<https://ism.univ-amu.fr/fr/crvm> (CRVM) of the ISM, will be used. The
evaluation will be carried out in collaboration with experts in the
field (public speaking coaches) to compare the output of the model with
the coaches' expertise.

*Required skills*
The candidate must have a background in Computer Science, Movement
Sciences, or Cognitive Sciences.
Skills in machine learning, experimental methods, and statistical
analysis are expected.
Knowledge of Natural Language Processing (NLP), signal processing,
motion capture, Python, and R would be appreciated.
Technical knowledge in programming (Java, C++, C#) and the Unity engine
would be a plus.

A multidisciplinary background is a must.

The thesis is fully funded by a 3-year doctoral contract (salary between 
1400 and 1600 €/month) in the context of the Inter-Ed Interdisciplinary 
call of Aix Marseille University.

The laboratories provide funding for professional travel, the 
candidate's training and participation in international conferences.

Knowledge of French is not required.

Aix Marseille University (http://www.univ-amu.fr/en), the largest French
university, is ideally located on the Mediterranean coast, only 1h30
from the Alps.

The application file consists of the following documents:

  * A detailed curriculum vitae,
  * A motivation letter,
  * A description of the academic background, with copies of academic
    records and the most recent diploma,
  * A letter of recommendation from the internship supervisor or the
    director of the master's thesis,
  * The Master 1 and/or Master 2 thesis.


*The application files should be sent before 8 May to:*
/Magalie Ochs/: magalie.ochs at lis-lab.fr
/Rémy Casanova/: remy.casanova at univ-amu.fr

For any questions, you can contact us by email.


