Dear colleagues,

We have edited and just published a special issue of the open access journal Dialogue & Discourse (https://dialogue-and-discourse.org/), titled "Embodied Conversational Systems in Human–Robot Interaction":
https://journals.uic.edu/ojs/index.php/dad/issue/view/781

The issue emerged from a series of workshops on Natural Language Generation in Human–Robot Interaction (at INLG 2018, http://purl.org/nlg-hri-workshop/2018, and HRI 2020/INLG 2020, https://purl.org/nlg-hri-workshop/2020) and a subsequent special session at SIGdial 2022.

We invite you to read the articles in the issue and hope that the collection will inspire further research and discussion.

Embodied conversational systems in human–robot interaction: Introduction to the special issue
    Dimitra Gkatzia, Hendrik Buschmeier, Mary Ellen Foster, Carl Strathearn
    https://doi.org/10.5210/dad.2025.301

Laughter use by virtual agents increases task success
    Bogdan Ludusan, Petra Wagner
    https://doi.org/10.5210/dad.2025.302

A modular architecture for creating multimodal embodied agents with an episodic Knowledge Graph as an explainable and controllable long-term memory
    Thomas Baier, Selene Báez Santamaría, Piek Vossen
    https://doi.org/10.5210/dad.2025.303

A graph-to-text approach to knowledge-grounded response generation in human–robot interaction
    Nicholas Thomas Walker, Stefan Ultes, Pierre Lison
    https://doi.org/10.5210/dad.2025.304

Prior lessons of incremental dialogue and robot action management for the age of language models
    Casey Kennington, Pierre Lison, David Schlangen
    https://doi.org/10.5210/dad.2025.305

Best regards,

Hendrik Buschmeier, Dimitra Gkatzia, Mary Ellen Foster, Carl Strathearn
D&D SI Guest Editors
--
Hendrik Buschmeier
Digital Linguistics Lab
Faculty of Linguistics and Literary Studies, Bielefeld University
https://purl.org/net/hbuschme