<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <div class=""><span style="caret-color: rgb(0, 0, 0);" class="">[Apologies
        for cross-posting]</span></div>
    <div class=""><span style="caret-color: rgb(0, 0, 0);" class=""><br
          class="">
      </span></div>
    <div class=""><b class=""><span style="caret-color: rgb(0, 0, 0);"
          class="">We are pleased to announce that the deadline for
          submission to the SIVA’23 workshop has been extended
          to: September, 20 2022</span></b></div>
    <div class="">
      <div class="" style="caret-color: rgb(0, 0, 0);">
        <div class="" style="word-wrap: break-word; -webkit-nbsp-mode:
          space; line-break: after-white-space;">
          <div class="">
            <div class="">
              <div class="" style="word-wrap: break-word;
                -webkit-nbsp-mode: space; line-break:
                after-white-space;">
                <div class="">
                  <div class="">
                    <div dir="auto" class="" style="word-wrap:
                      break-word; -webkit-nbsp-mode: space; line-break:
                      after-white-space;">
                      <div dir="auto" class="" style="word-wrap:
                        break-word; -webkit-nbsp-mode: space;
                        line-break: after-white-space;">
                        <div class=""><br class="">
                          CALL FOR PAPERS: SIVA'23<br class="">
                          Workshop on Socially Interactive Human-like
                          Virtual Agents<br class="">
                          From expressive and context-aware multimodal
                          generation of digital humans to understanding
                          the social cognition of real humans<br
                            class="">
                          <br class="">
                          Submission: <a
                            href="https://cmt3.research.microsoft.com/SIVA2023"
                            class="moz-txt-link-freetext">https://cmt3.research.microsoft.com/SIVA2023</a><br
                            class="">
                          SIVA'23 workshop: January 4, 2023, Waikoloa,
                          Hawaii, <a
                            href="https://www.stms-lab.fr/agenda/siva/detail/"
                            class="moz-txt-link-freetext">https://www.stms-lab.fr/agenda/siva/detail/</a><br
                            class="">
                          FG 2023 conference: January 4-8, 2023,
                          Waikoloa, Hawaii, <a
                            href="https://fg2023.ieee-biometrics.org/"
                            class="moz-txt-link-freetext">https://fg2023.ieee-biometrics.org/</a></div>
                        <div class=""><br class="">
                        </div>
                        <div class="">IMPORTANT DATES<br class="">
                          <br class="">
                          Submission Deadline <strike class="">September,
                            12 2022</strike> September, 20 2022<br
                            class="">
                          Notification of Acceptance: October, 15 2022 <br
                            class="">
                          Camera-ready deadline: October, 31 2022<br
                            class="">
                          Workshop: January, 4 2023<br class="">
                          <br class="">
                          OVERVIEW<br class="">
                          <br class="">
                          Due to the rapid growth of virtual, augmented,
                          and hybrid reality together with spectacular
                          advances in artificial intelligence, the
                          ultra-realistic generation and animation of
                          digital humans with human-like behaviors is
                          becoming a massive topic of interest. This
                          complex endeavor requires modeling several
                          elements of human behavior: the natural
                          coordination of multimodal behaviors spanning
                          text, speech, face, and body, and the
                          contextualization of behavior in response to
                          interlocutors of different cultures and
                          motivations. The challenges are thus twofold:
                          generating and animating coherent multimodal
                          behaviors, and modeling the expressivity and
                          contextualization of the virtual agent with
                          respect to human behavior, including
                          understanding and modeling how virtual agents
                          adapt their behavior to increase human
                          engagement. The aim of this workshop is to
                          connect traditionally distinct communities
                          (e.g., speech, vision, cognitive
                          neurosciences, social psychology) to elaborate
                          and discuss the future of human interaction
                          with human-like virtual agents. We expect
                          contributions from the fields of signal
                          processing, speech and vision, machine
                          learning and artificial intelligence,
                          perceptual studies, and cognitive and
                          neuroscience. Topics will range from
                          multimodal generative modeling of virtual
                          agent behaviors, and speech-to-face and
                          posture 2D and 3D animation, to original
                          research topics including style, expressivity,
                          and context-aware animation of virtual agents.
                          Moreover, controllable real-time virtual
                          agent models can serve as state-of-the-art
                          experimental stimuli and confederates in
                          novel, groundbreaking experiments that
                          advance the understanding of social
                          cognition in humans. Finally, these virtual
                          humans can be used to create virtual
                          environments for medical purposes including
                          rehabilitation and training.<br class="">
                          <br class="">
                          SCOPE<br class="">
                          <br class="">
                          Topics of interest include but are not limited
                          to:<br class="">
                          <br class="">
                          + Analysis of Multimodal Human-like Behavior<br
                            class="">
                          - Analysis and understanding of human
                          multimodal behavior (speech, gesture, face)<br
                            class="">
                          - Creating datasets for the study and modeling
                          of human multimodal behavior<br class="">
                          - Coordination and synchronization of human
                          multimodal behavior<br class="">
                          - Analysis of style and expressivity in human
                          multimodal behavior<br class="">
                          - Cultural variability of social multimodal
                          behavior<br class="">
                          <br class="">
                          + Modeling and Generation of Multimodal
                          Human-like Behavior<br class="">
                          - Multimodal generation of human-like behavior
                          (speech, gesture, face)<br class="">
                          - Face and gesture generation driven by text
                          and speech<br class="">
                          - Context-aware generation of multimodal
                          human-like behavior<br class="">
                          - Modeling of style and expressivity for the
                          generation of multimodal behavior<br class="">
                          - Modeling paralinguistic cues for multimodal
                          behavior generation<br class="">
                          - Few-shot or zero-shot transfer of style and
                          expressivity<br class="">
                          - Weakly-supervised adaptation of multimodal
                          behavior to context<br class="">
                          <br class="">
                          + Psychology and Cognition of Multimodal
                          Human-like Behavior<br class="">
                          - Cognition of deepfakes and ultra-realistic
                          digital manipulation of human-like behavior<br
                            class="">
                          - Social agents/robots as tools for capturing,
                          measuring and understanding multimodal
                          behavior (speech, gesture, face)<br class="">
                          - Neuroscience and social cognition of real
                          humans using virtual agents and physical
                          robots<br class="">
                          <br class="">
                          VENUE<br class="">
                          <br class="">
                          The SIVA workshop is organized as a satellite
                          workshop of the IEEE International Conference
                          on Automatic Face and Gesture Recognition
                          2023. The workshop will be co-located with the
                          FG 2023 and WACV 2023 conferences at the
                          Waikoloa Beach Marriott Resort, Hawaii, USA.<br
                            class="">
                          <br class="">
                          ADDITIONAL INFORMATION AND SUBMISSION DETAILS<br
                            class="">
                          <br class="">
                          Submissions must be original and not published
                          or submitted elsewhere. Short papers (3 pages
                          excluding references) are intended for early
                          research in emerging fields. Long papers (6
                          to 8 pages excluding references) are intended
                          for strongly original contributions, position
                          papers, or surveys. Manuscripts should be
                          formatted according to the Word or LaTeX
                          template provided on the workshop website.
                          All submissions will be reviewed by three
                          reviewers. The reviewing process will be
                          single-blind. Authors will be asked to
                          disclose possible conflicts of interest, such
                          as collaboration within the previous two
                          years. Moreover, care will be taken to avoid
                          assigning reviewers from the same institution
                          as the authors. Authors should submit their
                          articles as a single PDF file on the
                          submission website no later than September
                          20, 2022. Notification of acceptance will be
                          sent by October 15, 2022, and the
                          camera-ready version of the papers, revised
                          according to the reviewers' comments, should
                          be submitted by October 31, 2022. Accepted
                          papers will be published in the proceedings
                          of the FG 2023 conference. More information
                          can be found on the SIVA website.<br class="">
                          <br class="">
                          DIVERSITY, EQUALITY, AND INCLUSION<br class="">
                          <br class="">
                          The workshop will be held in a hybrid format,
                          both online and onsite. This format is
                          intended to accommodate travel restrictions
                          and COVID sanitary precautions, to promote
                          inclusion in the research community (travel
                          costs are high, and online presentations will
                          encourage research contributions from
                          geographical regions that would otherwise be
                          excluded), and to limit ecological impact
                          (e.g., CO2 footprint). The organizing
                          committee is committed to equality,
                          diversity, and inclusivity in its selection
                          of invited speakers. This effort extends from
                          the organizing committee and the invited
                          speakers to the program committee.<br class="">
                          <br class="">
                          <br class="">
                          ORGANIZING COMMITTEE<br class="">
                          🌸 Nicolas Obin, STMS Lab (Ircam, CNRS,
                          Sorbonne Université, ministère de la Culture)<br
                            class="">
                          🌸 Ryo Ishii, NTT Human Informatics
                          Laboratories<br class="">
                          🌸 Rachael E. Jack, University of Glasgow<br
                            class="">
                          🌸 Louis-Philippe Morency, Carnegie Mellon
                          University<br class="">
                          🌸 Catherine Pelachaud, CNRS - ISIR, Sorbonne
                          Université</div>
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </div>
    <div class=""><br class="">
    </div>
    <div class=""><br class="">
    </div>
    <br class="">
  </body>
</html>