CALL FOR PAPERS

2nd Workshop on Explainable Logic-Based Knowledge Representation (XLoKR 2021)
co-located with KR 2021
https://kr2021.kbsg.rwth-aachen.de

6-8 November 2021 (exact date(s) TBD), Hanoi, Vietnam (virtually)

https://xlokr21.ai.vub.ac.be/

Embedded or cyber-physical systems that interact autonomously with the
real world, or with users they are supposed to support, must continuously
make decisions based on sensor data, user input, knowledge acquired at
runtime, and knowledge provided at design time. To make the behavior of
such systems comprehensible, they need to be able to explain their
decisions to the user or, after something has gone wrong, to an accident
investigator.

While systems that use Machine Learning (ML) to interpret sensor data are
very fast and usually quite accurate, their decisions are notoriously hard
to explain, though huge efforts are currently being made to overcome this
problem. In contrast, decisions made by reasoning about symbolically
represented knowledge are in principle easy to explain. For example, if
the knowledge is represented in (some fragment of) first-order logic, and
a decision is made based on the result of a first-order reasoning process,
then one can in principle use a formal proof in an appropriate calculus to
explain a positive reasoning result, and a counter-model to explain a
negative one. In practice, however, things are not so easy in the symbolic
KR setting either. For example, proofs and counter-models may be very
large, and it may thus be hard to comprehend why they demonstrate a
positive or negative reasoning result, particularly for users who are not
experts in logic. To leverage explainability as an advantage of symbolic
KR over ML-based approaches, one therefore needs to ensure that
explanations can actually be given in a way that is comprehensible to
different classes of users (from knowledge engineers to laypersons).

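For a minimal illustration (the predicate and constant names here are
purely expository), suppose the knowledge base K consists of the two axioms

  forall x (Bird(x) -> Flies(x))   and   Bird(tweety).

The positive result that K entails Flies(tweety) can then be explained by
a two-step proof: instantiate the universal axiom with tweety and apply
modus ponens. The negative result that K does not entail Flies(rocky) can
be explained by a counter-model, e.g., an interpretation in which rocky is
neither a bird nor flying.
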
The problem of explaining why a consequence does or does not follow from a
given set of axioms has been considered in full first-order theorem proving
for at least 40 years, though usually with mathematicians as the intended
users. In knowledge representation and reasoning, efforts in this direction
are more recent and have usually been restricted to sub-areas of KR such as
AI planning and description logics. The purpose of this workshop is to bring
together researchers from different sub-areas of KR and automated deduction
who are working on explainability in their respective fields, with the goal
of exchanging experiences and approaches. A non-exhaustive list of areas
to be covered by the workshop is the following:
* AI planning
* Answer set programming
* Argumentation frameworks
* Automated reasoning
* Causal reasoning
* Constraint programming
* Description logics
* Non-monotonic reasoning
* Probabilistic representation and reasoning


** IMPORTANT DATES **

Paper submission deadline: July 2, 2021

Notification: August 6, 2021

Workshop dates: November 6-8, 2021 (exact date TBD)


** AUTHOR GUIDELINES AND SUBMISSION INFORMATION **

Researchers interested in participating in the workshop should submit extended
abstracts of 2-5 pages on topics related to explanation in logic-based KR.
Papers should be formatted in Springer LNCS style and must be submitted via
EasyChair: https://easychair.org/conferences/?conf=xlokr21

The workshop will have informal proceedings; thus, in addition to new work,
papers covering results that have recently been published or will be
published at other venues are also welcome.


** REMOTE PARTICIPATION DUE TO COVID-19 PANDEMIC **

We understand that the global public health situation may make it difficult or
impossible for some, if not all, participants to travel to Hanoi. For this
reason, we commit to allowing authors of accepted papers to present virtually
and will work hard to enable the best possible experience for all workshop
participants.