TY - GEN
T1 - Answer validation using textual entailment
AU - Pakray, Partha
AU - Gelbukh, Alexander
AU - Bandyopadhyay, Sivaji
PY - 2011
Y1 - 2011
N2 - We present an Answer Validation (AV) system based on Textual Entailment and Question Answering. The important features used to develop the AV system are Lexical Textual Entailment, Named Entity Recognition, Question-Answer type analysis, a chunk boundary module and a syntactic similarity module. The proposed AV system is rule-based. We first combine the question and the answer into the Hypothesis (H) and take the Supporting Text as the Text (T) to identify the entailment relation as either "VALIDATED" or "REJECTED". The important features used for the lexical Textual Entailment module in the present system are: WordNet-based unigram match, bigram match and skip-gram. In the syntactic similarity module, the important features used are: subject-subject comparison, subject-verb comparison, object-verb comparison and cross subject-verb comparison. The results obtained from the answer validation modules are integrated using a voting technique. For training purposes, we used the AVE 2008 development set. Evaluation scores obtained on the AVE 2008 test set show 66% precision and 65% F-score for the "VALIDATED" decision.
KW - Answer Validation Exercise (AVE)
KW - Chunk Boundary
KW - Named Entity (NE)
KW - Question Type
KW - Syntactic Similarity
KW - Textual Entailment (TE)
UR - http://www.scopus.com/inward/record.url?scp=79952272364&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-19437-5_29
DO - 10.1007/978-3-642-19437-5_29
M3 - Conference contribution
SN - 9783642194368
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 353
EP - 364
BT - Computational Linguistics and Intelligent Text Processing - 12th International Conference, CICLing 2011, Proceedings
T2 - 12th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing 2011
Y2 - 20 February 2011 through 26 February 2011
ER -