[DL] 1st Call for Semantic Web Challenge Proposals - 13th Extended Semantic Web Conference 2016

Heiko Paulheim heiko at informatik.uni-mannheim.de
Tue Oct 6 07:55:32 CEST 2015


Apologies for cross-posting
========================================================
                     13th ESWC 2016
           http://2016.eswc-conferences.org/
        Call for Semantic Web Challenge Proposals
========================================================

OVERVIEW
========================================================
The ESWC organizers are glad to announce that the Challenges Track will 
be included again in the program of ESWC 2016! Five challenges were held 
last year [1] and allowed the conference to attract a broader audience 
beyond the Semantic Web community, spanning disciplines such as 
Recommender Systems and Knowledge Extraction. This year, this call is 
open to select the challenges to be held at the conference.

The purpose of challenges is to showcase the maturity of 
state-of-the-art methods and tools on tasks common to the Semantic Web 
community and adjacent disciplines, in a controlled setting involving 
rigorous evaluation.

Semantic Web Challenges are an official track of the conference, 
ensuring significant visibility for the challenges as well as their 
participants. Challenge participants are asked to present their 
submissions and to provide a paper describing their work. These papers 
undergo peer review by experts relevant to the challenge task, and will 
be published in the official ESWC 2016 Satellite Events proceedings.

In addition to the publication of proceedings, challenges at ESWC 2016 
benefit from high visibility and direct access to the ESWC audience and 
community.


CHALLENGE PROPOSALS
========================================================
Challenge organizers are encouraged to submit proposals adhering to the 
following criteria:

- At least one task involving semantics in data. The task(s) should be 
well defined and related to the Semantic Web, but not necessarily 
confined to it. We strongly encourage tasks that involve closely 
related communities such as NLP, Recommender Systems, Machine Learning, 
or Information Retrieval. If multiple tasks are provided, they should 
be independent, so that participants may choose which to participate in.

- Task descriptions likely to interest a wider audience. We encourage 
challenge organizers to propose at least one basic task that can be 
addressed by a larger audience from their community. Engaging with your 
challenge audience and obtaining feedback from your target group on the 
task design can help shape the task and ensure a sufficient number of 
participants.

- Clear and rigorous definition of the tasks. For each task, you should 
define a deterministic and objective way to verify whether the goal of 
the task has been achieved and, if applicable, to what extent. The best 
way is usually to provide detailed examples of input data and expected 
output. The examples must cover all the possible situations that can 
occur while performing the task, and should leave no room for ambiguity 
about whether, in a particular case, the task has been completed.

- Valid dataset (if applicable). If accepted, you should find or create 
a dataset that will be used for the challenge. In any case, you must 
specify the provenance of the dataset (if it contains human 
annotations, how these were obtained). You must make sure you have the 
right to use/publish this dataset and clearly state the license for its 
use within the challenge. The dataset should be split into two parts: 
the training part and the evaluation part. The training part contains 
the data and the results that should be obtained when performing the 
task. For the evaluation part, you should publish only the data and 
make sure that the correct results have not previously been available 
to the participants. When proposing the challenge, you must provide 
details on the dataset and on the way it is or will be created; the 
dataset itself can be made available later.

- Challenge committee: composed of at least three respected researchers 
with experience in the tasks of the challenge. They help evaluate the 
papers submitted by the participants, and also validate the evaluation 
procedure.

- Evaluation metrics and procedure. For each task, there must be a 
number of objective criteria (metrics), e.g. precision and recall. The 
evaluation procedure and the way in which the metrics will be 
calculated must be clearly specified and made transparent to 
participants (see the illustrative sketch after this list).
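
To make the dataset split and the scoring concrete, here is a minimal, 
hypothetical Python sketch. It is an illustration only, not a required 
format: the (id, label) task representation and all names in it are 
invented for this example. It deterministically splits a gold-standard 
dataset into a public training part and a withheld evaluation part, and 
scores a submission with precision, recall, and F1.

    # Illustrative sketch only; the (id, label) task format is invented.
    import random

    def split_dataset(examples, eval_fraction=0.3, seed=42):
        """Deterministically split examples into a training part (data
        plus gold results) and an evaluation part whose gold results
        are withheld from participants."""
        rng = random.Random(seed)          # fixed seed => reproducible split
        shuffled = examples[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * (1 - eval_fraction))
        training = shuffled[:cut]          # published with expected results
        eval_public = [{"id": e["id"], "data": e["data"]}
                       for e in shuffled[cut:]]  # data only, for participants
        eval_gold = shuffled[cut:]         # kept private by the organizers
        return training, eval_public, eval_gold

    def precision_recall_f1(gold, predicted):
        """Score a submission; both arguments are sets of (id, label)
        pairs, compared by exact match."""
        true_positives = len(gold & predicted)
        precision = true_positives / len(predicted) if predicted else 0.0
        recall = true_positives / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    if __name__ == "__main__":
        gold = {("doc1", "Person"), ("doc2", "Place"), ("doc3", "Person")}
        predicted = {("doc1", "Person"), ("doc2", "Person"), ("doc3", "Person")}
        p, r, f = precision_recall_f1(gold, predicted)
        print("precision=%.2f recall=%.2f f1=%.2f" % (p, r, f))

Publishing such a scoring script together with the call is one simple 
way to make the evaluation procedure fully transparent to participants.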

Among the selection criteria for choosing the supported challenges are:
* Potential number of interested participants
* Rigor and transparency of the evaluation procedure
* Relevance for the Semantic Web community
* Endorsements (from researchers working on the task, from industry 
players interested in results, from future participants)


IMPORTANT DATES
========================================================
* Challenge proposals due – Friday November 20, 2015, 23:59 Hawaii Time
* Challenges chosen/merged – notification to organizers sent Friday 
December 4, 2015
* Training data ready and challenge Calls for Papers sent – Friday 
January 15th, 2016
* Challenge papers submission deadline – Friday March 11th, 2016
* Challenge paper reviews due – Tuesday April 5th, 2016
* Notifications sent to participants and invitations to submit task 
results – Friday April 8th, 2016
* Test data (and other participation tools) published – Friday April 
8th, 2016
* Camera-ready papers due – Sunday April 24th, 2016
* Submission of challenge results – free choice of organizers
* Proclamation of winners – during the ESWC 2016 closing ceremony



SUBMISSION DETAILS
========================================================
The challenge proposals should contain at least the following elements:
* A summary description of the challenge and tasks
* How the training/testing data will be built and/or procured
* The evaluation methodology to be used, including clear evaluation 
criteria and the exact way in which they will be measured. Who will 
perform the evaluation and how will transparency be assured?
* The anticipated availability of the necessary resources to the 
participants
* The resources required to prepare the tasks (computation and 
annotation time, costs of annotations, etc.)
* The list of challenge committee members who will evaluate the 
challenge papers (please indicate which of the listed members already 
accepted the role)

In case of doubt, feel free to send us your challenge proposal drafts 
as early as possible; the challenge chairs will provide you with 
feedback and answers to any questions you may have.

Please submit proposals via EasyChair 
(https://easychair.org/conferences/?conf=eswc2016challenges) as soon as 
possible, and no later than *20 November 2015*.

For any questions, please do not hesitate to get in touch with the 
ESWC 2016 challenge chairs:
* Stefan Dietze, L3S Research Center, Germany (dietze at l3s.de)
* Anna Tordai, Elsevier, Netherlands (a.tordai at elsevier.com)


[1] http://2015.eswc-conferences.org/call-challenges



-- 
Prof. Dr. Heiko Paulheim
Data and Web Science Group
University of Mannheim
Phone: +49 621 181 2646
B6, 26, Room C1.08
D-68159 Mannheim

Mail: heiko at informatik.uni-mannheim.de
Web: www.heikopaulheim.com



