[Apologies if you receive multiple copies]

*******************************************************************************
CALL FOR PAPERS
*******************************************************************************

The 3rd Workshop on Explainable Logic-Based Knowledge Representation
(XLoKR 2022) will be held in Haifa, Israel, on July 31, 2022; see

  https://sites.google.com/view/xlokr2022

It will be co-located with KR 2022 (https://kr2022.cs.tu-dortmund.de/)
at FLoC 2022 (https://www.floc2022.org/).

*******************************************************************************
Description
*******************************************************************************

Embedded or cyber-physical systems that interact autonomously with the
real world, or with the users they are supposed to support, must
continuously make decisions based on sensor data, user input, knowledge
acquired during runtime, and knowledge provided at design time. To make
the behavior of such systems comprehensible, they need to be able to
explain their decisions to the user or, after something has gone wrong,
to an accident investigator.

While systems that use Machine Learning (ML) to interpret sensor data
are very fast and usually quite accurate, their decisions are
notoriously hard to explain, though huge efforts are currently being
made to overcome this problem. In contrast, decisions made by reasoning
about symbolically represented knowledge are in principle easy to
explain. For example, if the knowledge is represented in (some fragment
of) first-order logic, and a decision is made based on the result of a
first-order reasoning process, then one can in principle use a formal
proof in an appropriate calculus to explain a positive reasoning result,
and a counter-model to explain a negative one. In practice, however,
things are not so easy in the symbolic KR setting either. For example,
proofs and counter-models may be very large, and thus it may be hard to
comprehend why they demonstrate a positive or negative reasoning result,
in particular for users who are not experts in logic. Thus, to leverage
explainability as an advantage of symbolic KR over ML-based approaches,
one needs to ensure that explanations can really be given in a way that
is comprehensible to different classes of users (from knowledge
engineers to laypersons).

The problem of explaining why a consequence does or does not follow
from a given set of axioms has been studied in full first-order theorem
proving for at least 40 years, though usually with mathematicians as the
intended users. In knowledge representation and reasoning, efforts in
this direction are more recent and have usually been restricted to
sub-areas of KR such as AI planning and description logics. The purpose
of this workshop is to bring together researchers from different
sub-areas of KR and automated deduction who work on explainability in
their respective fields, with the goal of exchanging experiences and
approaches.

*******************************************************************************
Topics of Interest
*******************************************************************************

A non-exhaustive list of areas covered by the workshop is the following:
* AI planning
* Answer set programming
* Argumentation frameworks
* Automated reasoning
* Causal reasoning
* Constraint programming
* Description logics
* Non-monotonic reasoning
* Probabilistic representation and reasoning

*******************************************************************************
IMPORTANT DATES
*******************************************************************************

Abstract submission deadline: May 2, 2022
Paper submission deadline: May 9, 2022
Notification: June 9, 2022
Workshop date: July 31, 2022

<div>*******************************************************************************</div>
<div>AUTHOR GUIDELINES AND SUBMISSION INFORMATION</div>
<div>*******************************************************************************</div>
<div><br>
</div>
<div>We invite extended abstracts of 2-5 pages on topics related to explanation in
</div>
<div>logic-based KR. The papers should be formatted in Springer LNCS Style and can
</div>
<div>be submitted via EasyChair:</div>
<div><br>
</div>
<div> https://easychair.org/my/conference?conf=xlokr2022 </div>
<div><br>
</div>
Since the workshop will only have informal proceedings and the main
purpose is to exchange results, we welcome not only papers covering
unpublished results, but also previous publications that fall within
the scope of the workshop.