

Evaluation Exercises on Semantic Evaluation - ACL SigLex event




#7  Argument Selection and Coercion 


Task Description

This task involves identifying the compositional operations involved in argument selection. Most annotation schemes to date that encode propositional or predicative content have focused on identifying the predicate type, the argument extent, and the semantic role (or label) assigned to that argument by the predicate. In contrast, this task attempts to capture the "compositional history" of argument selection relative to the predicate: specifically, the operations of type adjustment induced by a predicate over its arguments when they do not match its selectional properties.

The task is defined as follows: for each argument of a predicate, identify whether the entity in that argument position satisfies the type expected by the predicate. If it does not, identify how the entity in that position satisfies the typing expected by the predicate; that is, identify the source and target types in a type-shifting (or coercion) operation.

For this task, the possible relations between the predicate and a given argument are restricted to selection and coercion. In selection, the argument NP satisfies the typing requirements of the predicate. For example, in the sentence "The child threw the ball", the object NP "the ball" directly satisfies the type expected by the predicate, Physical Object. Otherwise, a coercion has occurred. For example, in the sentence "The White House denied this statement.", the type expected in subject position by the predicate is Human, but the surface NP is typed as Location. The task is to identify both the type mismatch and the type shift, namely Location -> Human.
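The two annotation outcomes above can be sketched as a small data structure. This is purely illustrative: the field names and type labels here are assumptions, not the official GLML annotation schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for one predicate-argument annotation.
# Field names are illustrative, not the official GLML schema.
@dataclass
class ArgumentAnnotation:
    predicate: str
    argument: str
    operation: str                      # "selection" or "coercion"
    source_type: str                    # surface type of the argument NP
    target_type: Optional[str] = None   # type expected by the predicate (coercions only)

# "The child threw the ball": the object NP satisfies the expected type.
selection = ArgumentAnnotation(
    predicate="threw", argument="the ball",
    operation="selection", source_type="Physical Object")

# "The White House denied this statement.": Location is coerced to Human.
coercion = ArgumentAnnotation(
    predicate="denied", argument="The White House",
    operation="coercion", source_type="Location", target_type="Human")

print(coercion.operation, coercion.source_type, "->", coercion.target_type)
```

A selection keeps `target_type` empty, since no type shift occurs; a coercion records both ends of the shift.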

Resources and Corpus Development

The following methodology will be followed in corpus creation:

  1. A set of selection contexts will be chosen.
  2. A set of sentences will be randomly selected for each chosen context.
  3. The target noun phrase will be identified in each sentence, and a composition type determined in each case.
  4. In cases of coercion, the source and target types for the semantic head of each relevant noun phrase will be identified.

We will perform double annotation and adjudication over the corpus.
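The double-annotation step implies checking inter-annotator agreement before adjudication. A minimal sketch using Cohen's kappa on the selection/coercion decision; kappa is one common agreement measure, not one the organizers specify, and the toy labels are invented.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: two annotators, four arguments, one disagreement.
a = ["selection", "coercion", "selection", "selection"]
b = ["selection", "coercion", "coercion", "selection"]
print(cohens_kappa(a, b))  # prints 0.5
```

Disagreements surfaced this way would then go to the adjudication pass described above.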

Evaluation Methodology

Precision and recall will be used as evaluation metrics, and a scoring program will be provided to participants. Two subtasks will be evaluated separately: (1) identifying the argument type and (2) identifying the compositional operation (i.e., selection vs. coercion).
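A minimal sketch of how precision and recall might be computed for one subtask, scoring (argument id, label) pairs against a gold standard. The official scorer may differ; the ids and labels below are invented for illustration.

```python
def precision_recall(gold, predicted):
    """Micro-averaged precision and recall over (item_id, label) pairs.

    Illustrative only; the task's supplied scoring program may differ.
    """
    gold_set, pred_set = set(gold), set(predicted)
    true_positives = len(gold_set & pred_set)
    precision = true_positives / len(pred_set) if pred_set else 0.0
    recall = true_positives / len(gold_set) if gold_set else 0.0
    return precision, recall

# Subtask (2): selection vs. coercion for each argument (ids are invented).
gold = {("s1-arg1", "selection"), ("s2-arg1", "coercion"), ("s3-arg1", "coercion")}
pred = {("s1-arg1", "selection"), ("s2-arg1", "selection"), ("s3-arg1", "coercion")}
p, r = precision_recall(gold, pred)
print(p, r)  # one wrong label out of three, so precision and recall both equal 2/3
```

Because every argument receives exactly one label in this subtask, precision and recall coincide; they can diverge when a system leaves some arguments unlabeled.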


J. Pustejovsky, A. Rumshisky, J. L. Moszkowicz, and O. Batiukova. 2009. GLML: Annotating Argument Selection and Coercion. IWCS-8.

Organizers: James Pustejovsky, Nicoletta Calzolari, Anna Rumshisky, Jessica Moszkowicz, Elisabetta Jezek, Valeria Quochi, Olga Batiukova
Web Site: http://asc-task.org/


  • 11/10/09 - Trial data for English and Italian posted
  • 3/10/10 - Training data for English and Italian released
  • 3/27/10 - Test data for English and Italian released
  • 4/02/10 - Competition closes
