
SemEval-2

Evaluation Exercises on Semantic Evaluation - ACL SigLex event

2010

TASKS


#2  Cross-Lingual Lexical Substitution 

Description

The goal of this task is to provide a framework for the evaluation of systems for cross-lingual lexical substitution. Given a paragraph and a target word, the goal is to provide several correct translations for that word in a given language, with the constraint that the translations fit the given context in the source language. This is a follow-up to the English lexical substitution task from SemEval-2007 (McCarthy and Navigli, 2007), but this time the task is cross-lingual.

While there are connections between this task and automatic machine translation, there are several major differences. First, cross-lingual lexical substitution targets one word at a time, rather than an entire sentence as machine translation does. Second, in cross-lingual lexical substitution we seek as many good translations as possible for the given target word, as opposed to just one translation, which is the typical output of machine translation. There are also connections between this task and word sense disambiguation tasks that use distinctions in translations as word senses (Resnik and Yarowsky, 1997); however, in this task we do not restrict the translations to those found in a specific parallel corpus: the annotators and systems are free to choose translations from any available resource. Also, we do not assume a fixed grouping of translations into "senses", so a token instance of a word may share translations with other token instances that are not themselves directly related.

Given a paragraph and a target word, the task is to provide several correct translations for that word in a given language. We will use English as the source language and Spanish as the target language.
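The task interface described above can be sketched as follows. This is a hypothetical illustration, not the official task format: the `Instance` structure, the toy English-to-Spanish lexicon, and the context heuristic are all assumptions made for the example.

```python
# Sketch of the cross-lingual lexical substitution interface: given a
# target word and its context paragraph, return Spanish translations
# that fit that context. Names and the toy lexicon are hypothetical.
from dataclasses import dataclass

@dataclass
class Instance:
    target: str   # English target word, e.g. "bank"
    context: str  # the paragraph containing the target word

# Hypothetical English -> Spanish candidate lexicon for one word.
TOY_LEXICON = {
    "bank": ["orilla", "ribera", "banco"],
}

def substitute(instance: Instance) -> list[str]:
    """Return candidate Spanish translations fitting the source context.

    A real system would rank candidates by contextual fit; this sketch
    filters a toy lexicon with a trivial keyword heuristic.
    """
    candidates = TOY_LEXICON.get(instance.target, [])
    # Trivial heuristic: prefer the river-bank sense when "river"
    # appears in the context paragraph.
    if "river" in instance.context:
        return [c for c in candidates if c in ("orilla", "ribera")]
    return candidates

print(substitute(Instance("bank", "living on the bank of the river")))
# ['orilla', 'ribera']
```

Note that, per the task definition, the system may return several translations per instance, not just one.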

Organizers: Rada Mihalcea (University of North Texas), Diana McCarthy (University of Sussex), Ravi Sinha (University of North Texas)
Web Site: http://lit.csci.unt.edu/index.php/Semeval_2010


Timeline:
  • Test data availability: 1 March - 2 April, 2010
  • Result submission deadline: within 7 days after downloading the *test* data.
  • Closing competition for this task: 2 April

#3  Cross-Lingual Word Sense Disambiguation 

Description
There is a general feeling in the WSD community that WSD should not be treated as an isolated research task, but should instead be integrated into real NLP applications such as machine translation or multilingual information retrieval. Using translations from a corpus instead of human-defined sense labels (e.g. WordNet) makes it easier to integrate WSD into multilingual applications, sidesteps the sense-granularity problem (which may itself be task-dependent), is language-independent, and offers a valid alternative for languages that lack sufficient sense inventories and sense-tagged corpora.

We propose an Unsupervised Word Sense Disambiguation task for English nouns by means of parallel corpora. The sense label is composed of translations in the different languages and the sense inventory is built up by three annotators on the basis of the Europarl parallel corpus by means of a concordance tool. All translations (above a predefined frequency threshold) of a polysemous word are grouped into clusters/"senses" of that given word.
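The inventory construction described above can be sketched as follows. This is an illustrative assumption, not the organizers' actual procedure: the cluster assignments stand in for the annotators' manual grouping, and the toy Dutch data and threshold value are invented for the example.

```python
# Sketch of building a sense inventory from a word's aligned
# translations: keep only translations above a frequency threshold and
# group them into clusters ("senses"). Cluster ids here stand in for
# the annotators' manual grouping; all data below is hypothetical.
from collections import Counter

def build_sense_inventory(aligned_translations, min_freq=2):
    """aligned_translations: list of (translation, cluster_id) pairs
    extracted from a parallel corpus for one polysemous word."""
    counts = Counter(t for t, _ in aligned_translations)
    clusters = {}
    for translation, cluster_id in aligned_translations:
        if counts[translation] >= min_freq:  # frequency threshold
            clusters.setdefault(cluster_id, set()).add(translation)
    return {cid: sorted(members) for cid, members in clusters.items()}

# Toy Dutch translations of "bank" with hand-assigned sense clusters;
# "zetel" occurs only once and falls below the threshold.
data = [("oever", "river"), ("oever", "river"), ("dijk", "river"),
        ("dijk", "river"), ("bank", "finance"), ("bank", "finance"),
        ("zetel", "finance")]
print(build_sense_inventory(data))
# {'river': ['dijk', 'oever'], 'finance': ['bank']}
```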

Languages: English - Dutch, French, German, Italian, Spanish

Subtasks:

1. Bilingual Evaluation (English - Language X)

Example:
[English] ... equivalent to giving fish to people living on the [bank] of the river ...

Sense Label = {oever/dijk} [Dutch]
Sense Label = {rives/rivage/bord/bords} [French]
Sense Label = {Ufer} [German]
Sense Label = {riva} [Italian]
Sense Label = {orilla} [Spanish]

2. Multi-lingual Evaluation (English - all target languages)

Example:
... living on the [bank] of the river ...
Sense Label = {oever/dijk, rives/rivage/bord/bords, Ufer, riva, orilla}
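The multilingual sense label above is simply the concatenation of the per-language clusters. A minimal sketch, assuming the "/" separator and the fixed Dutch-French-German-Italian-Spanish order shown in the examples:

```python
# Join each target language's sense cluster into one multilingual
# label, mirroring the notation in the example above. The separator
# and language order are assumptions based on the sample output.
def multilingual_label(per_language_clusters):
    return "{" + ", ".join("/".join(c) for c in per_language_clusters) + "}"

clusters = [["oever", "dijk"],                     # Dutch
            ["rives", "rivage", "bord", "bords"],  # French
            ["Ufer"],                              # German
            ["riva"],                              # Italian
            ["orilla"]]                            # Spanish
print(multilingual_label(clusters))
# {oever/dijk, rives/rivage/bord/bords, Ufer, riva, orilla}
```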

Resources

As the task is formulated as an unsupervised WSD task, we will not annotate any training material. Participants can use the Europarl corpus that is freely available and that will be used for building up the sense inventory.
For the test data, native speakers will decide on the correct translation cluster(s) for each test sentence and give their top-3 translations from the predefined list of Europarl translations; these top-3 choices are used to assign weights to the translations in the answer clusters for that sentence.
Participants will receive manually annotated development and test data:
  • Development/sample data: 5 polysemous English nouns, each with 20 example instances
  • Test data: 20 polysemous English nouns (selected from the test data as used in the lexical substitution task), each with 50 test instances

Evaluation

The evaluation will be done using precision and recall. We will perform both a "best result" evaluation (the first translation returned by a system) and a more relaxed evaluation for the "top ten" results (the first ten translations returned by a system).
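One plausible way to compute such scores, in the style of the SemEval-2007 lexical substitution metrics (credit proportional to annotator weights): the "best" score credits only the system's first translation, while the relaxed score credits any gold translation among the first ten. The exact official formulas may differ; the weights below are hypothetical.

```python
# Sketch of "best" vs. "top ten" scoring against frequency-weighted
# gold translations. This follows the lexical-substitution scoring
# style; the official SemEval scorer may normalise differently.
def best_score(system_answers, gold_weights):
    """Credit for the system's first translation only: its gold weight
    divided by the total gold weight for this test instance."""
    if not system_answers:
        return 0.0
    total = sum(gold_weights.values())
    return gold_weights.get(system_answers[0], 0) / total

def top10_score(system_answers, gold_weights):
    """Relaxed credit: summed gold weights of the first ten guesses."""
    total = sum(gold_weights.values())
    return sum(gold_weights.get(a, 0) for a in system_answers[:10]) / total

gold = {"orilla": 3, "ribera": 1}  # hypothetical annotator weights
print(best_score(["orilla", "banco"], gold))   # 0.75
print(top10_score(["ribera", "orilla"], gold)) # 1.0
```

Precision would then average these per-instance scores over the instances a system attempted, and recall over all instances in the test set.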

Organizers: Els Lefever and Veronique Hoste (University College Ghent, Belgium)
Web Site: http://webs.hogent.be/~elef464/lt3_SemEval.html


Timeline:
  • Test data availability: 22 March - 25 March, 2010
  • Result submission deadline: within 4 days after downloading the *test* data.
