|
{ |
|
"paper_id": "W10-0505", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:05:00.621735Z" |
|
}, |
|
"title": "Towards Automatic Question Answering over Social Media by Learning Question Equivalence Patterns", |
|
"authors": [ |
|
{ |
|
"first": "Tianyong", |
|
"middle": [], |
|
"last": "Hao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "City University of Hong Kong City University of Hong Kong Emory University", |
|
"location": { |
|
"addrLine": "81 Tat Chee Avenue 81 Tat Chee Avenue 201 Dowman Drive Kowloon", |
|
"postCode": "30322", |
|
"settlement": "Hong Kong SAR Kowloon, Hong Kong SAR Atlanta", |
|
"country": "Georgia, USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Wenyin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "City University of Hong Kong City University of Hong Kong Emory University", |
|
"location": { |
|
"addrLine": "81 Tat Chee Avenue 81 Tat Chee Avenue 201 Dowman Drive Kowloon", |
|
"postCode": "30322", |
|
"settlement": "Hong Kong SAR Kowloon, Hong Kong SAR Atlanta", |
|
"country": "Georgia, USA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Agichtein", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "City University of Hong Kong City University of Hong Kong Emory University", |
|
"location": { |
|
"addrLine": "81 Tat Chee Avenue 81 Tat Chee Avenue 201 Dowman Drive Kowloon", |
|
"postCode": "30322", |
|
"settlement": "Hong Kong SAR Kowloon, Hong Kong SAR Atlanta", |
|
"country": "Georgia, USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Many questions submitted to Collaborative Question Answering (CQA) sites have been answered before. We propose an approach to automatically generating an answer to such questions based on automatically learning to identify \"equivalent\" questions. Our main contribution is an unsupervised method for automatically learning question equivalence patterns from CQA archive data. These patterns can be used to match new questions to their equivalents that have been answered before, and thereby help suggest answers automatically. We experimented with our method approach over a large collection of more than 200,000 real questions drawn from the Yahoo! Answers archive, automatically acquiring over 300 groups of question equivalence patterns. These patterns allow our method to obtain over 66% precision on automatically suggesting answers to new questions, significantly outperforming conventional baseline approaches to question matching.", |
|
"pdf_parse": { |
|
"paper_id": "W10-0505", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Many questions submitted to Collaborative Question Answering (CQA) sites have been answered before. We propose an approach to automatically generating an answer to such questions based on automatically learning to identify \"equivalent\" questions. Our main contribution is an unsupervised method for automatically learning question equivalence patterns from CQA archive data. These patterns can be used to match new questions to their equivalents that have been answered before, and thereby help suggest answers automatically. We experimented with our method approach over a large collection of more than 200,000 real questions drawn from the Yahoo! Answers archive, automatically acquiring over 300 groups of question equivalence patterns. These patterns allow our method to obtain over 66% precision on automatically suggesting answers to new questions, significantly outperforming conventional baseline approaches to question matching.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Social media in general exhibit a rich variety of information sources. Question answering (QA) has been particularly amenable to social media, as it allows a potentially more effective alternative to web search by directly connecting users with the information needs to users willing to share the information directly (Bian, 2008) . One of the useful by-products of this process is the resulting large archives of datawhich in turn could be good sources of information for automatic question answering. Yahoo! Answers, as a collaborative QA system (CQA), has acquired an archive of more than 40 Million Questions and 500 Million an-swers, as of 2008 estimates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 318, |
|
"end": 330, |
|
"text": "(Bian, 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main premise of this paper is that there are many questions that are syntactically different while semantically similar. The key problem is how to identify such question groups. Our method is based on the key observation that when the best non-trivial answers chosen by asker in the same domain are exactly the same, the corresponding questions are semantically similar. Based on this observation, we propose answering new method for learning question equivalence patterns from CQA archives. First, we retrieve \"equivalent\" question groups from a large dataset by grouping them by the text of the best answers (as chosen by the askers). The equivalence patterns are then generated by learning common syntactic and lexical patterns for each group. To avoid generating patterns from questions that were grouped together by chance, we estimate the group's topic diversity to filter the candidate patterns. These equivalence patterns are then compared against newly submitted questions. In case of a match, the new question can be answered by proposing the \"best\" answer from a previously answered equivalent question.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
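
{

"text": "To make the grouping step concrete, the sketch below shows one minimal way to implement it in Python. It is an illustration under the assumptions stated above (questions are grouped by an exact match on the normalized text of their asker-selected best answer); the function and variable names are ours and are not part of any released implementation:\n\nfrom collections import defaultdict\n\ndef group_by_best_answer(records):\n    # records: iterable of (question_text, best_answer_text) pairs, with\n    # trivial best answers (e.g., \"yes\") assumed to be filtered out beforehand\n    groups = defaultdict(list)\n    for question, best_answer in records:\n        # normalize case and whitespace so formatting noise does not split groups\n        key = \" \".join(best_answer.lower().split())\n        groups[key].append(question)\n    # keep only groups in which at least two questions share the same best answer\n    return [questions for questions in groups.values() if len(questions) > 1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},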
|
{ |
|
"text": "We performed large-scale experiments over a more than 200,000 questions from Yahoo! Answers. Our method generated over 900 equivalence patterns in 339 groups and allows to correctly suggest an answer to a new question, roughly 70% of the timeoutperforming conventional similaritybased baselines for answer suggestion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Moreover, for the newly submitted questions, our method can identify equivalent questions and generate equivalent patterns incrementally, which can greatly improve the feasibility of our method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While most questions that share exactly the same \"best\" answer are indeed semantically equivalent, some may share the same answer by chance. To ----------------1 Work done while visiting Emory University filter out such cases, we propose an estimate of Topical Diversity (TD), calculated based on the shared topics for all pairs of questions in the group. If the diversity is larger than a threshold, the questions in this group are considered not equivalent, and no patterns are generated. To calculate this measure, we consider as topics the \"notional words\" (NW) in the question, which are the head nouns and the heads of verb phrases recognized by the OpenNLP parser. Using these words as \"topics\", TD for a group of questions G is calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Equivalence Patterns", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2211\u2211 \u2212 = = < \u2212 \u00d7 \u2212 = 1 1 2 ) ( ) 1 ( ) 1 ( 2 ) ( n i n j j i j i j i Q Q Q Q n n G TD U I", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Equivalence Patterns", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where Q i and Q j are the notional words in each question in within group G with n questions total. Based on the question groups, we can generate equivalence patterns to extend the matching coveragethus retrieving similar questions with different syntactic structure. OpenNLP is used to generate the basic syntactic structures by phrase chunking. After that, only the chunks which contain NWs are analyzed to acquire the phrase labels as the syntactic pattern. Table 1 shows an example of a generated pattern. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 461, |
|
"end": 468, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning Equivalence Patterns", |
|
"sec_num": "2" |
|
}, |
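
{

"text": "The TD measure is straightforward to compute from the notional-word sets. The following Python sketch is a direct transcription of the formula as reconstructed above, with each question represented by the set of its notional words; the helper name is illustrative only:\n\nfrom itertools import combinations\n\ndef topical_diversity(notional_word_sets):\n    # notional_word_sets: one set of notional words per question in the group\n    n = len(notional_word_sets)\n    if n < 2:\n        return 0.0\n    total = 0.0\n    for q_i, q_j in combinations(notional_word_sets, 2):\n        union = q_i | q_j\n        if union:\n            # Jaccard distance between the two questions' notional-word sets\n            total += 1.0 - len(q_i & q_j) / len(union)\n    # average over the n * (n - 1) / 2 question pairs in the group\n    return 2.0 * total / (n * (n - 1))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning Equivalence Patterns",

"sec_num": "2"

},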
|
{ |
|
"text": "Our dataset is 216,563 questions and 2,044,296 answers crawled from Yahoo! Answers. From this we acquired 833 groups of similar questions distributed in 65 categories. After filtering by topical diversity, 339 groups remain to generate equivalence patterns. These groups contain 979 questions, with, 2.89 questions per group on average.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "After that, we split our data into 413 questions for training (200 groups) and 566 questions, with randomly selected an additional 10,000 questions, for testing (the remainder) to compare three variants of our system Equivalence patterns only (EP), Notional words only (NW), and the weighted combination (EP+NW). To match question, both equivalence patterns and notional words are used with different weights. The weight of pattern, disjoint NW and shared NW are 0.7, 0.4 and 0.6 after parameter training. We then compare the variants and results are reported in Using EP+NW as our best method, we now compare it to traditional similarity-based methods on whole question set. TF*IDF-based vector space model (TFIDF), and a more highly tuned Cosine model (that only keeps the same \"notional words\" filtered by phrase chunking) are used as baselines. Figure 3 reports the results, which indicate that EP+NW, outperforms both Cosine and TFIDF methods on all metrics. Our work expands on previous significant efforts on CQA retrieval (e.g., Bian et al., Jeon et al., Kosseim et al.) . Our contribution is a new unsupervised and effective method for learning question equivalence patterns that exploits the structure of the collaborative question answering archivesan important part of social media.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1037, |
|
"end": 1078, |
|
"text": "Bian et al., Jeon et al., Kosseim et al.)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 849, |
|
"end": 857, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Evaluation", |
|
"sec_num": "3" |
|
}, |
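
{

"text": "The exact form of the weighted combination is not spelled out above, so the following Python sketch should be read as one plausible, hypothetical scoring rule consistent with the reported weights (0.7 for a pattern match, 0.6 for shared NWs, 0.4 for disjoint NWs); the function name and the combination itself are our own illustration, not the system's actual formula:\n\ndef match_score(q_pattern, q_nws, cand_pattern, cand_nws,\n                w_pattern=0.7, w_disjoint=0.4, w_shared=0.6):\n    # q_pattern / cand_pattern: chunk-label sequences such as \"[NP]-[VP]-[NP]\"\n    # q_nws / cand_nws: sets of notional words for the two questions\n    score = w_pattern if q_pattern == cand_pattern else 0.0\n    shared = q_nws & cand_nws\n    disjoint = q_nws ^ cand_nws\n    if shared or disjoint:\n        # hypothetical rule: reward shared NWs and penalize disjoint ones,\n        # normalized by the total number of distinct notional words\n        score += (w_shared * len(shared) - w_disjoint * len(disjoint)) / (len(shared) + len(disjoint))\n    return score",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Evaluation",

"sec_num": "3"

},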
|
{ |
|
"text": "ReferencesBian, J.,Liu, Y., Agichtein, E., and Zha, H. 2008. Finding the right facts in the crowd: factoid question answering over social media. WWW. Jeon, J., Croft, B.W. and Lee, J.H. 2005. Finding similar questions in large question and answer archives Export Find Similar. CIKM. Kosseim, L. and Yousefi, J. 2008.Improving the performance of question answering with semantically equivalent answer patterns, Journal of Data & Knowledge Engineering.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": {}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "What was the first book you discovered that made you think reading wasn't a complete waste of time? Pattern: [NP]-[VP]-[NP]-[NP]-[VP]-[VP]-[NP]-[VP]-\u2026 NW: (Disjoint: read waste time) (Shared: book think) Question: What book do you think everyone should have at home? Pattern: [NP]-[NP]-[VP]-[NP]-[VP]-[PP]-[NP] NW: (Disjoint: do everyone have home) (Shared: book think)", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Performance of EP+NW vs. baselines", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "A group of equivalence patterns", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td/><td>Recall</td><td>Precision</td><td>F1 score</td></tr><tr><td>EP</td><td>0.811</td><td>0.385</td><td>0.522</td></tr><tr><td>NW</td><td>0.378</td><td>0.559</td><td>0.451</td></tr><tr><td>EP+NW</td><td>0.726</td><td>0.663</td><td>0.693</td></tr><tr><td colspan=\"4\">Table 2. Performance comparison of three variants</td></tr></table>", |
|
"type_str": "table", |
|
"text": ", showing that EP+NW achieves the highest performance.", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |