* Please accept our apologies if you receive multiple copies of this call *

Call for Papers:
======================================================================
Special Issue of the International Journal of Approximate Reasoning on
"Defeasible and Ampliative Reasoning"
======================================================================

Classical reasoning is not flexible enough to directly formalize certain nuances of human decision making. These involve different kinds of reasoning, such as reasoning with uncertainty, exceptions, similarity, vagueness, and incomplete or contradictory information, among many others.

It turns out that everyday reasoning usually exhibits the two salient, intertwined aspects below:

* Ampliative aspect: augmenting the underlying reasoning by allowing more conclusions. In practical contexts, this amounts to the ability to make inferences that venture beyond the scope of the premises, in a way that is unsound yet justifiable. Prominent examples are (i) default reasoning: jumping to conclusions deemed plausible 'by default', i.e., in the absence of information to the contrary, as when applying negation as failure or adopting the closed-world assumption; (ii) inductive and abductive reasoning: taking chances in drawing conclusions that implicitly call for further scrutiny or empirical testing, as when making inductive hypotheses in scientific theories or finding abductive explanations in forensics; and (iii) analogical reasoning: extrapolating from very few examples (in the worst case only one) on the basis of observable similarities or dissimilarities.

* Defeasible aspect: curtailing the underlying reasoning by disregarding or disallowing some conclusions that ought not to be sanctioned. In practice, this amounts to the ability to retract one's conclusions or to admit exceptions in reasoning. Examples are (i) retractive reasoning: withdrawing conclusions that have already been derived, as in belief contraction or negotiation; and (ii) preemptive reasoning: preventing or blocking the inference of certain conclusions by disallowing their derivation in the first place, as when dealing with exceptional cases in multiple inheritance networks and in regulatory systems.

Considerable effort has gone into the study and definition of formalisms within which the aforementioned aspects of everyday reasoning can adequately be captured at different levels. Despite the progress that has been achieved, a large avenue remains open for exploration. Indeed, the literature on non-monotonic reasoning has focused almost exclusively on the defeasibility of argument forms, whereas belief revision paradigms are restricted to an underlying classical (Tarskian) consequence relation. Moreover, even though some of the issues related to uncertainty in reasoning have been studied using probabilistic approaches and statistical methods, their integration with qualitative frameworks remains a challenge.
Finally, well-established approaches are largely based either on propositional languages (with limited expressiveness) or on full first-order logic, which is haunted by undecidability. Modern applications require formalisms that strike a good balance between expressive power and computational complexity, so that they can also be considered good candidates for eXplainable Artificial Intelligence (XAI).

This special issue aims to bring together work on defeasible and ampliative reasoning from the perspectives of artificial intelligence, cognitive science, philosophy and related disciplines in a multi-disciplinary way, thereby consolidating the mission of the DARe workshop series.

-- Topics of interest --

Submissions are welcome on topics relevant to defeasible and ampliative reasoning, including but not limited to:

- Abductive and inductive reasoning
- Explanation finding, diagnosis and causal reasoning
- Inconsistency handling and exception-tolerant reasoning
- Decision-making under uncertainty and incomplete information
- Default reasoning, non-monotonic reasoning, non-monotonic logics, conditional logics
- Specific instances and variations of ampliative and defeasible reasoning
- Probabilistic and statistical approaches to reasoning
- Vagueness, rough sets, granularity and fuzzy logics
- Philosophical foundations of defeasibility
- Empirical studies of reasoning
- Relationship with cognition and language
- Contextual reasoning
- Preference-based reasoning
- Analogical reasoning
- Similarity-based reasoning
- Belief dynamics and merging
- Argumentation theory, negotiation and conflict resolution
- Heuristic and approximate reasoning
- Defeasible normative systems
- Reasoning about actions and change
- Reasoning about knowledge and belief, epistemic and doxastic logics
- Ampliative and defeasible temporal and spatial reasoning
- Computational aspects of reasoning with uncertainty
- Implementations and systems
- Applications of uncertainty in reasoning

-- How to submit --

The submission URL is: http://www.evise.com/evise/jrnl/IJA

When submitting your manuscript, please select “VSI:DARe special issue” as the article type.

Check the “Help” link at the above URL for instructions.

If you have any enquiries, please feel free to contact us at dare.to.contact.us@gmail.com

-- Important Dates --

- Submission deadline: 15 February 2018
- Notification: 1 November 2018
class="">- Publication date: 1 January 2019</div><div class=""><br class=""></div><div class="">-- Guest editors --</div><div class=""><br class=""></div><div class="">- Richard Booth, Cardiff University, UK</div><div class="">- Giovanni Casini, University of Luxembourg</div><div class="">- Szymon Klarman, Semantic Integration Ltd., UK</div><div class="">- Gilles Richard, Université Paul Sabatier, France</div><div class="">- Ivan Varzinczak, CRIL, Univ. Artois & CNRS, France</div><div class=""><br class=""></div><div class=""><div class="">--<br class="">Ivan Varzinczak<br class="">CRIL, Univ. Artois & CNRS, France<br class=""><a href="http://member.acm.org/~ijv" class="">http://member.acm.org/~ijv</a></div><div class=""><br class=""></div></div></body></html>