<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">
<div class="">*** With apologies for multiple postings ***</div>
<br class="">
<b class="">Third Call for Papers<br class="">
<br class="">
P-VLAM: People in Vision, Language And the Mind</b><br class="">
<br class="">
Workshop to be held at the 13th edition of the Language Resources and Evaluation Conference (LREC 2022), Palais du Pharo, Marseille, France, June 2022.<br class="">
<br class="">
<a href="https://p-vlam.github.io/" class="">https://p-vlam.github.io</a>
<div class=""><br class="">
We invite paper submissions for the second workshop on People in Vision, Language, and the Mind (formerly ONION 2020), which discusses how people, including their bodies, faces and mental states, are described in text with associated images, and modelled in computational and cognitive terms. We are interested in contributions from diverse areas, including language generation, language analysis, cognitive computing, affective computing and multimodal (especially vision and language) modelling.<br class="">
<br class="">
<b class="">Detailed Workshop goals</b><br class="">
<br class="">
The workshop will provide a forum to present and discuss current research on multimodal resources, as well as computational and cognitive models, that describe people in terms of their bodies and faces, including their affective state as it is reflected physically. Such models might generate textual descriptions of people, generate images corresponding to people’s descriptions, or more generally exploit multimodal representations for different purposes and applications. Knowledge of the way human bodies and faces are perceived, understood and described by humans is key to the creation of such resources and models; the workshop therefore also invites contributions in which the human body and face are studied from a cognitive, neurocognitive or multimodal communication perspective.<br class="">
<br class="">
Human body postures and faces are being studied by researchers from different research communities, including those working with vision and language modelling, natural language generation, cognitive science, cognitive psychology, multimodal communication and
embodied conversational agents. The workshop aims to reach out to all these communities to explore the many different aspects of research on the human body and face, including the resources that such research needs, and to foster cross-disciplinary synergy. <br class="">
<br class="">
The ability to adequately model and describe people in terms of their body and face is relevant to a variety of language technology applications, e.g., conversational agents and interactive narrative generation, as well as forensic applications in which people need to be depicted or their images generated from textual or spoken descriptions. Such systems need resources and models in which images of human bodies and faces are coupled with linguistic descriptions; the research needed to develop them therefore lies at the interface between vision and language research. At the same time, this line of research raises important ethical questions, both from the perspective of data collection methodology and from the perspective of detecting and avoiding bias in models trained to process and interpret human attributes.<br class="">
By focusing on the modelling and processing of people, and bringing in relevant insights from the cognitive and neurocognitive fields, the workshop will explore and further develop a particular area within vision and language research.<br class="">
<br class="">
<b class="">Relevant topics</b><br class="">
<br class="">
We are inviting short and long papers reporting original research, surveys, position papers, and demos. Authors are strongly encouraged to identify and discuss ethical issues arising from their work, insofar as it involves the use of image data or descriptions of
people.<br class="">
Relevant topics include, but are not limited to, the following:<br class="">
− Datasets of facial images, as well as body postures, gestures and their descriptions<br class="">
− Methods for the creation and annotation of multimodal resources dedicated to the description of people<br class="">
− Methods for the validation of multimodal resources for descriptions of people<br class="">
− Experimental studies of facial expression understanding by humans<br class="">
− Models or algorithms for automatic facial description generation<br class="">
− Emotion recognition by humans<br class="">
− Multimodal automatic emotion recognition from images and text<br class="">
− Subjectivity in face perception<br class="">
− Communicative, relational and intentional aspects of head pose and eye-gaze<br class="">
− Collection and annotation methods for facial descriptions<br class="">
− Coding schemes for the annotation of body posture and facial expression<br class="">
− Understanding and description of the human face and body in different contexts, including commercial applications, art, forensics, etc.<br class="">
− Modelling of the human body, face and facial expressions for embodied conversational agents<br class="">
− Generation of full-body images and/or facial images from textual descriptions<br class="">
− Ethical and data protection issues related to the collection and/or automatic description of images of real people<br class="">
− Any form of bias in models which seek to make sense of human physical attributes in language and vision. <br class="">
<br class="">
<b class="">Important dates</b><br class="">
<br class="">
Paper submission deadline: April 8, 2022 <br class="">
Notification of acceptance: May 3, 2022<br class="">
Camera-ready papers: May 23, 2022<br class="">
Workshop: June 20, 2022<br class="">
<br class="">
<b class="">Submission guidelines</b><br class="">
<br class="">
Short paper submissions may consist of up to 4 pages of content, while long papers may have up to 8 pages of content. References and appendices do not count towards these page limits.<br class="">
All submissions must follow the LREC 2022 style files, which are available for LaTeX (preferred) and MS Word and can be retrieved from the following address: <a href="https://lrec2022.lrec-conf.org/en/submission2022/authors-kit/" class="">https://lrec2022.lrec-conf.org/en/submission2022/authors-kit/</a><br class="">
<br class="">
Papers must be submitted digitally, in PDF format, and uploaded through the START online submission system here:<br class="">
<br class="">
<a href="https://www.softconf.com/lrec2022/P-VLAM/" class="">https://www.softconf.com/lrec2022/P-VLAM/</a></div>
<div class=""><br class="">
The authors of accepted papers will be required to submit a camera-ready version to be included in the final proceedings. Further details will be sent to authors after the notification of acceptance.<br class="">
<br class="">
Identify, Describe and Share your LRs!<br class="">
<br class="">
● Describing your LRs in the LRE Map is now a normal practice in the submission procedure of LREC (introduced in 2010 and adopted by other conferences). To continue the efforts initiated at LREC 2014 about “Sharing LRs” (data, tools, web-services, etc.),
authors will have the possibility, when submitting a paper, to upload LRs in a special LREC repository. This effort of sharing LRs, linked to the LRE Map for their description, may become a new “regular” feature for conferences in our field, thus contributing
to creating a common repository where everyone can deposit and share data.<br class="">
● As scientific work requires accurate citations of referenced work so as to allow the community to understand the whole context and also replicate the experiments conducted by other researchers, LREC 2022 endorses the need to uniquely Identify LRs through the
use of the International Standard Language Resource Number (ISLRN, <a href="http://www.islrn.org/" class="">www.islrn.org</a>), a
Persistent Unique Identifier to be assigned to each Language Resource. The assignment of ISLRNs to LRs cited in LREC papers will be offered at submission time.<br class="">
<br class="">
<b class="">Organisers</b><br class="">
<br class="">
Patrizia Paggio, University of Copenhagen and University of Malta, <a href="mailto:paggio@hum.ku.dk" class="">paggio@hum.ku.dk</a></div>
<div class="">Albert Gatt, Utrecht University and University of Malta, <a href="mailto:a.gatt@uu.nl" class="">a.gatt@uu.nl</a></div>
<div class="">Marc Tanti, University of Malta, <a href="mailto:marc.tanti@um.edu.mt" class="">marc.tanti@um.edu.mt</a></div>
<div class=""><br class="">
</div>
<div class=""><b class="">Programme Committee</b></div>
<div class=""><br class="">
</div>
<div class="">See the workshop’s website.</div>
<div class=""><br class="">
</div>
<div class="">*****************</div>
<br class="">
<br class="">
<div class="">
<div class="">Patrizia Paggio<br class="">
<br class="">
Professor<br class="">
University of Malta<br class="">
Institute of Linguistics and Language Technology<br class="">
<a href="mailto:patrizia.paggio@um.edu.mt" class="">patrizia.paggio@um.edu.mt</a></div>
<div class=""><br class="">
</div>
<div class="">Senior Researcher<br class="">
University of Copenhagen<br class="">
Centre for Language Technology<br class="">
<a href="mailto:paggio@hum.ku.dk" class="">paggio@hum.ku.dk</a><br class="">
</div>
</div>
</body>
</html>