

Evaluation Exercises on Semantic Evaluation - ACL SigLex event




#18  Disambiguating Sentiment Ambiguous Adjectives 

Some adjectives are neutral in sentiment polarity out of context, but take on positive, neutral, or negative meaning in specific contexts. Such words can be called dynamic sentiment ambiguous adjectives. For instance, “价格高|the price is high” carries negative meaning, while “质量高|the quality is high” has a positive connotation. Disambiguating sentiment ambiguous adjectives is an interesting task at the intersection of word sense disambiguation (WSD) and sentiment analysis. However, in previous work sentiment ambiguous words have not been tackled in the field of WSD, and most research on sentiment analysis simply discards them.

This task aims to create a benchmark dataset for disambiguating dynamic sentiment ambiguous adjectives. Sentiment ambiguous words are pervasive in many languages. In this task we concentrate on Chinese, but we believe the disambiguation techniques should be language-independent. In total, 14 dynamic sentiment ambiguous adjectives are selected, all of which are high-frequency words in Mandarin Chinese: 大|big, 小|small, 多|many, 少|few, 高|high, 低|low, 厚|thick, 薄|thin, 深|deep, 浅|shallow, 重|heavy, 轻|light, 巨大|huge, 重大|grave.
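The intuition behind the task (the adjective's polarity flips with the orientation of the noun it modifies) can be illustrated with a toy heuristic. Everything below — the noun-orientation lexicon and the "direction" sets — is purely illustrative and is not part of the task data or any proposed system:

```python
# Toy sketch: a "high"-type adjective is positive when the noun is one
# whose increase is desirable, negative when its increase is undesirable.
# This two-entry lexicon is invented for illustration only.
NOUN_ORIENTATION = {
    "价格": "negative_when_high",  # price: high is bad
    "质量": "positive_when_high",  # quality: high is good
}

# A rough split of the 14 target adjectives by scalar direction
# (illustrative grouping, not an official resource).
HIGH_DIRECTION = {"大", "多", "高", "厚", "深", "重", "巨大", "重大"}
LOW_DIRECTION = {"小", "少", "低", "薄", "浅", "轻"}

def polarity(noun: str, adjective: str) -> str:
    """Guess the sentiment of a (noun, adjective) pair."""
    orientation = NOUN_ORIENTATION.get(noun)
    if orientation is None:
        return "neutral"  # unknown noun: no evidence either way
    high_is_good = orientation == "positive_when_high"
    adj_is_high = adjective in HIGH_DIRECTION
    return "positive" if high_is_good == adj_is_high else "negative"

print(polarity("价格", "高"))  # negative: "the price is high"
print(polarity("质量", "高"))  # positive: "the quality is high"
```

A real system would of course need to find the modified noun in free text and cover far more nouns than this, but the sketch shows why the same adjective receives opposite labels in the two example sentences above.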

The dataset contains two parts. Some sentences containing the target adjectives will be extracted from the Chinese Gigaword corpus (LDC catalog number LDC2005T14); the remaining sentences will be gathered through search engines such as Google. First, the sentences will be automatically segmented and POS-tagged. Then the ambiguous adjectives will be manually annotated with their correct sentiment polarity in the sentence context. Two human annotators will annotate the sentences independently (double-blind), and a third annotator will check the annotation.
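The task description does not say how agreement between the two independent annotators will be measured; Cohen's kappa is one common choice for a double-annotation setup like this, and a minimal sketch looks as follows (the example labels are invented):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items.

    Chance agreement is estimated from each annotator's own
    label distribution (the standard Cohen formulation).
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    expected = sum(counts_a[lab] * counts_b[lab] for lab in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: two annotators disagree on one of four sentences.
a = ["positive", "negative", "positive", "neutral"]
b = ["positive", "negative", "negative", "neutral"]
print(cohens_kappa(a, b))
```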

This task will be carried out in an unsupervised setting, so no training data will be provided. The entire dataset of about 4,000 sentences will be provided as the test set. Evaluation will be performed in terms of the usual precision, recall, and F1 scores.

Organizers: Yunfang Wu, Peng Jin, Miaomiao Wen and Shiwen Yu (Peking University, Beijing, China)
Web Site:



  • Test data release: March 23, 2010
  • Result submission deadline: postponed to March 27, 2010 (4 days after downloading the test data)
  • Organizers send the test results: April 2, 2010

© 2008 FBK-irst