<div dir="ltr">CALL FOR PARTICIPATION - ONTOLOGY ALIGNMENT EVALUATION INITIATIVE (OAEI) 2015<br>--Apologies for cross-posting--<br><br>Since 2004, OAEI has supported the extensive and rigorous evaluation of ontology matching<br>and instance matching techniques. <br><br>In 2015, OAEI will have the following tracks (<a href="http://oaei.ontologymatching.org/2015/">http://oaei.ontologymatching.org/2015/</a>):<br><br> Benchmark<br><br> Anatomy<br><br> Multifarm<br><br> Interactive Matching (New datasets)<br><br> Large Biomedical Ontologies<br><br> Instance Matching (New datasets)<br><br> Ontology Alignment for Query Answering<br><br><br>NEW DATASETS:<br><br>- Interactive Matching. The Interactive Matching track will include the Conference, Anatomy and LargeBio datasets. The addition of the large ontologies introduces new challenges in optimizing user interaction. Moreover, we will also simulate domain experts with variable error rates, reflecting a more realistic scenario in which a (simulated) user does not always provide a correct answer. In such scenarios, asking the user a large number of questions may also have a negative impact.<br><br><a href="http://oaei.ontologymatching.org/2015/interactive/index.html">http://oaei.ontologymatching.org/2015/interactive/index.html</a><br><br><br>- Instance Matching. The Instance Matching Track aims at evaluating the performance of matching tools whose goal is to detect the degree of similarity between pairs of items/instances expressed in the form of OWL ABoxes. The track is organized into five independent tasks. To participate in the Instance Matching Track, submit results for one, several, or all of the tasks. Each task comprises two tests of different scales (i.e., number of instances to match): i) Sandbox (small scale). It contains two datasets, called source and target, as well as the set of expected mappings (i.e., the reference alignment). 
ii) Mainbox (medium scale). It contains two datasets, called source and target. This test is blind, meaning that the reference alignment is not given to the participants. In both tests, the goal is to discover the matching pairs (i.e., mappings) between the instances in the source dataset and the instances in the target dataset. <br><br><a href="http://islab.di.unimi.it/im_oaei_2015/index.html">http://islab.di.unimi.it/im_oaei_2015/index.html</a><br><br><br>IMPORTANT DATES<br><br> July 10th: datasets available for prescreening.<br><br> July 31st: datasets are frozen.<br><br> July 31st to August 31st: participants can send their wrapped systems for test runs (note that in the OAEI 2015 edition we have updated the SEALS client and its tutorial). <br><br> August 31st: participants send final versions of their wrapped tools.<br><br> September 28th: evaluation is executed and results are analyzed.<br><br> October 5th: final paper due.<br><br> October 12th: Ontology Matching workshop.<br><br> November 16th: Final version of system papers due (sharp).<br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr">Ernesto Jiménez-Ruiz<br>Research Assistant<br>
Department of Computer Science<br>
University of Oxford<br>Wolfson Building, Parks Road, Oxford OX1 3QD, UK<br><br><a href="http://krono.act.uji.es/people/Ernesto" target="_blank">http://krono.act.uji.es/people/Ernesto</a><br><a href="http://www.cs.ox.ac.uk/people/ernesto.jimenez-ruiz/" target="_blank">http://www.cs.ox.ac.uk/people/ernesto.jimenez-ruiz/</a><br><br><br></div></div>
</div>