<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<div style="font-family:Calibri,Arial,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
<span>[Apologies for cross-posting] <br>
</span>
<div><br>
</div>
<div>***Submission deadline: September 20th, 2020***<br>
</div>
<div><br>
</div>
<div>----------------1st CALL FOR PAPERS------------------------------------<br>
</div>
<div><br>
</div>
<div>NL4XAI @ INLG2020<br>
</div>
<div>2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI2020)<br>
</div>
<div>December 15 - 18, Dublin, Ireland<br>
</div>
<div><br>
</div>
<div><a href="https://sites.google.com/view/nl4xai2020" target="_blank" rel="noopener noreferrer">https://sites.google.com/view/nl4xai2020</a>
<br>
</div>
<div><br>
</div>
<div>This workshop will be held as part of the 13th International Conference on Natural Language Generation (INLG2020), which is supported by the Special Interest Group on NLG of the Association for Computational Linguistics. INLG2020 is to be held in Dublin
(Ireland), 15 - 18 December, 2020.<br>
</div>
<div><br>
</div>
<div><b>However, due to COVID-19, it is very likely that the workshop will be held online in a virtual format.</b><br>
</div>
<div><br>
</div>
<div>CALL FOR PAPERS<br>
</div>
<div><br>
</div>
<div>The focus of this workshop is on the automatic generation of interactive explanations in natural language (NL), as humans naturally produce them, and as a complement to visualization tools. NL technologies, both NL Generation (NLG) and NL Processing (NLP) techniques,
are expected to enhance knowledge extraction and representation through human-machine interaction (HMI). As remarked in the latest challenge issued by the US Defense Advanced Research Projects Agency (DARPA), "even though current AI systems offer many benefits
in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans". Accordingly, users without a strong background in AI require a new generation of Explainable AI systems that interact naturally with humans and provide
comprehensible explanations of automatically made decisions. The ultimate goal is to build trustworthy AI that benefits people through fairness, transparency and explainability. Achieving this goal requires careful consideration of not only technical but also
ethical and legal issues.<br>
</div>
<div><br>
</div>
<div>We solicit contributions in the form of regular papers (up to 4 pages plus 1 page of references, in the ACL paper format) or demo papers (up to 2 pages) addressing research topics in any aspect of Explainable AI systems.<br>
</div>
<div><br>
</div>
<div>Submissions should be made through <a href="https://easychair.org/conferences/?conf=nl4xai2020" target="_blank" rel="noopener noreferrer">
https://easychair.org/conferences/?conf=nl4xai2020</a> <br>
</div>
<div><br>
</div>
<div>TOPICS (including, but not limited to)<br>
</div>
<div>+ Definitions and Theoretical Issues on Explainable AI<br>
</div>
<div>+ Interpretable Models versus Explainable AI systems<br>
</div>
<div>+ Explaining black-box models<br>
</div>
<div>+ Explaining Bayesian Networks<br>
</div>
<div>+ Explaining Fuzzy Systems<br>
</div>
<div>+ Explaining Logical Formulas<br>
</div>
<div>+ Multi-modal Semantic Grounding and Model Transparency<br>
</div>
<div>+ Explainable Models for Text Production<br>
</div>
<div>+ Verbalizing Knowledge Bases<br>
</div>
<div>+ Models for Explainable Recommendations<br>
</div>
<div>+ Interpretable Machine Learning<br>
</div>
<div>+ Self-explanatory Decision-Support Systems<br>
</div>
<div>+ Explainable Agents<br>
</div>
<div>+ Argumentation Theory for Explainable AI<br>
</div>
<div>+ Natural Language Generation for Explainable AI<br>
</div>
<div>+ Interpretable Human-Machine Multi-modal Interaction<br>
</div>
<div>+ Metrics for Explainability Evaluation<br>
</div>
<div>+ Usability of Explainable AI/interfaces<br>
</div>
<div>+ Applications of Explainable AI Systems<br>
</div>
<div><br>
</div>
<div>IMPORTANT DATES<br>
</div>
<div><br>
</div>
<div><b>Tentative Title and Authors: July 20, 2020</b><br>
</div>
<div>Submissions: September 20, 2020<br>
</div>
<div>Notification of acceptance: October 20, 2020<br>
</div>
<div>Camera-ready papers: November 20, 2020<br>
</div>
<div>Workshop session: December 15 - 18, 2020<br>
</div>
<div><br>
</div>
<div>PUBLICATION<br>
</div>
<div>Submissions will undergo peer review by members of the workshop's program/reviewing committee, who will assess their relevance and originality for the workshop. All accepted papers will be published in the ACL Anthology.<br>
</div>
<div><br>
</div>
<div>ORGANIZERS<br>
</div>
<div>José M. Alonso (Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, Spain)<br>
</div>
<div>Alejandro Catalá (Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, Spain)<br>
</div>
<div><br>
</div>
<div>PROGRAM COMMITTEE (still to be expanded)<br>
</div>
<div>+ Alberto Bugarín, CiTIUS, University of Santiago de Compostela<br>
</div>
<div>+ Katarzyna Budzynska, Institute of Philosophy and Sociology of the Polish Academy of Sciences<br>
</div>
<div>+ Claire Gardent, CNRS-LORIA <br>
</div>
<div>+ Pablo Gamallo, CiTIUS, University of Santiago de Compostela<br>
</div>
<div>+ Marcin Koszowy, Warsaw University of Technology<br>
</div>
<div>+ Simon Mille, Universitat Pompeu Fabra<br>
</div>
<div>+ Nir Oren, University of Aberdeen<br>
</div>
<div>+ Martín Pereira-Fariña, University of Santiago de Compostela<br>
</div>
<div>+ Ehud Reiter, University of Aberdeen, Arria NLG plc.<br>
</div>
<div>+ Carles Sierra, Institute of Research on Artificial Intelligence (IIIA), Spanish National Research Council (CSIC)<br>
</div>
<div>+ Mariët Theune, Human Media Interaction, University of Twente <br>
</div>
<span></span><br>
</div>
<div>
<div style="font-family:Calibri,Arial,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
Best regards</div>
</div>
<br>
</div>
<div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div id="Signature">
<div>
<div>
<table style="font-size:9pt; color:#444444">
<tbody>
<tr>
<td rowspan="4" style="padding-right:5px">
<div>
<table style="font-size:9pt; color:#444444">
<tbody>
<tr>
<td rowspan="4" style="padding-right:5px"><a href="http://citius.usc.es/"><span><img class="EmojiInsert" alt="CiTIUS" data-outlook-trace="F:1|T:1" src="cid:d86aa19c-c2da-421b-9cd3-e93185f1316c"></span></a></td>
<td><a href="http://citius.usc.es/v/alejandro.catala" style="color:#222222"><span>Alejandro Catalá</span></a></td>
</tr>
<tr>
<td><span style="color:#222222">Researcher/Investigador Juan de la Cierva</span></td>
</tr>
<tr>
<td><a href="mailto:alejandro.catala@usc.es"><img class="EmojiInsert" alt="E-mail:" width="11" height="11" data-outlook-trace="F:1|T:1" src="cid:49dd305d-a8ec-433d-a633-2a280becae4f"><span style="color:#222222"> alejandro.catala@usc.es</span></a> ·
<img class="EmojiInsert" alt="Phone:" width="11" height="11" data-outlook-trace="F:1|T:1" src="cid:7dd20b35-96a4-4be4-8fe3-616bcc71c8d9"><span style="color:#222222"> +34 881816460</span></td>
</tr>
<tr>
<td><a href="http://citius.usc.es"><span><img class="EmojiInsert" alt="Website:" width="11" height="11" data-outlook-trace="F:1|T:1" src="cid:370916e1-187d-4047-9fa0-3c06c7c0a09f"> citius.usc.es</span></a> ·
<a href="http://twitter.com/acatalaHCI"><span><img class="EmojiInsert" alt="Twitter:" width="11" height="11" data-outlook-trace="F:1|T:1" src="cid:2596d4e1-d405-4c79-9be1-ef0e3fd1e1bb"> acatalaHCI</span></a></td>
</tr>
</tbody>
</table>
</div>
<br>
</td>
<td><br>
</td>
</tr>
<tr>
<td><br>
</td>
</tr>
<tr>
<td><br>
</td>
</tr>
<tr>
<td><br>
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</body>
</html>