<html><head></head><body><div style="font-family: Verdana;font-size: 12.0px;">
<div>CFP for the 2nd Workshop on Explainable Logic-Based Knowledge Representation (XLoKR 2021)</div>
<div><br/>
co-located with KR 2021<br/>
<a href="https://kr2021.kbsg.rwth-aachen.de" target="_blank">https://kr2021.kbsg.rwth-aachen.de</a></div>
<div> </div>
<div>6-8 November 2021 (exact date(s) TBD), Hanoi, Vietnam (virtually)<br/>
<a href="https://xlokr21.ai.vub.ac.be/" target="_blank">https://xlokr21.ai.vub.ac.be/</a></div>
<div> </div>
<div> </div>
<div>==================================================<br/>
Workshop Description<br/>
==================================================</div>
<div> </div>
<div>Embedded or cyber-physical systems that interact autonomously with the<br/>
real world, or with users they are supposed to support, must continuously<br/>
make decisions based on sensor data, user input, knowledge they have<br/>
acquired at runtime, and knowledge provided at design time.<br/>
To make the behavior of such systems comprehensible, they need to be<br/>
able to explain their decisions to the user or, after something has<br/>
gone wrong, to an accident investigator.</div>
<div> </div>
<div>While systems that use Machine Learning (ML) to interpret sensor<br/>
data are very fast and usually quite accurate, their decisions are<br/>
notoriously hard to explain, though considerable efforts are currently<br/>
being made to overcome this problem. In contrast, decisions made by<br/>
reasoning about symbolically represented knowledge are in principle<br/>
easy to explain. For example, if the knowledge is represented in (some<br/>
fragment of) first-order logic, and a decision is made based on the result<br/>
of a first-order reasoning process, then one can in principle use a formal<br/>
proof in an appropriate calculus to explain a positive reasoning result,<br/>
and a counter-model to explain a negative one. In practice, however, things<br/>
are not so easy in the symbolic KR setting either. For example, proofs and<br/>
counter-models may be very large, and it may thus be hard to comprehend<br/>
why they demonstrate a positive or negative reasoning result, in particular<br/>
for users who are not experts in logic. Thus, to leverage explainability as<br/>
an advantage of symbolic KR over ML-based approaches, one needs to ensure<br/>
that explanations can really be given in a way that is comprehensible to<br/>
different classes of users (from knowledge engineers to laypersons).</div>
<div> </div>
<div>The problem of explaining why a consequence does or does not follow from a<br/>
given set of axioms has been studied in full first-order theorem proving<br/>
for at least 40 years, though usually with mathematicians as the intended<br/>
users. In knowledge representation and reasoning, efforts in this direction<br/>
are more recent and have usually been restricted to sub-areas of KR such as AI<br/>
planning and description logics. The purpose of this workshop is to bring<br/>
together researchers from different sub-areas of KR and automated deduction<br/>
who are working on explainability in their respective fields, with the goal<br/>
of exchanging experiences and approaches.</div>
<div> </div>
<div> </div>
<div>==================================================<br/>
Keynote Speakers<br/>
==================================================</div>
<div> </div>
<div>Joseph Y. Halpern<br/>
<a href="https://www.cs.cornell.edu/home/halpern/" target="_blank">https://www.cs.cornell.edu/home/halpern/</a></div>
<div> </div>
<div>Sheila McIlraith<br/>
<a href="https://www.cs.toronto.edu/~sheila/" target="_blank">https://www.cs.toronto.edu/~sheila/</a></div>
<div> </div>
<div><br/>
==================================================<br/>
Topics of Interest<br/>
==================================================</div>
<div> </div>
<div>We invite contributions related to explaining the outcome of KR methods<br/>
or using KR methods to explain the outcome of other approaches.<br/>
Examples include:</div>
<div> </div>
<div>* AI planning<br/>
* Answer set programming<br/>
* Argumentation frameworks<br/>
* Automated reasoning<br/>
* Causal reasoning<br/>
* Constraint programming<br/>
* Description logics<br/>
* Non-monotonic reasoning<br/>
* Probabilistic representation and reasoning</div>
<div> </div>
<div><br/>
==================================================<br/>
Important Dates<br/>
==================================================</div>
<div> </div>
<div>Paper submission deadline: July 2, 2021</div>
<div> </div>
<div>Notification: August 6, 2021</div>
<div> </div>
<div>Workshop dates: November 6-8, 2021 (exact date TBD)</div>
<div> </div>
<div> </div>
<div>==================================================<br/>
Author Guidelines and Submission Information<br/>
==================================================</div>
<div> </div>
<div>We invite abstracts of 2-5 pages (excluding references) on topics related to explanation<br/>
in logic-based KR. Papers should be formatted in Springer LNCS style and can<br/>
be submitted via EasyChair at</div>
<div> </div>
<div><a href="https://easychair.org/conferences/?conf=xlokr21" target="_blank">https://easychair.org/conferences/?conf=xlokr21</a>.</div>
<div> </div>
<div>The workshop will have informal proceedings, and thus, in addition to new work,<br/>
we also welcome papers covering results that have recently been published or will be<br/>
published at other venues.</div>
<div> </div>
<div><br/>
==================================================<br/>
Remote Participation due to COVID-19 Pandemic<br/>
==================================================</div>
<div> </div>
<div>We understand that the global public health situation may make it difficult or<br/>
impossible for some, if not all, participants to travel to Hanoi. For this<br/>
reason, we commit to allowing authors of accepted papers to present virtually<br/>
and will work hard to enable the best possible experience for all workshop<br/>
participants.</div>
</div></body></html>