{ "paper_id": "W11-0111", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:40:35.344911Z" }, "title": "Acquiring entailment pairs across languages and domains: A data analysis", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Kharagpur", "location": { "country": "India" } }, "email": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e4t Heidelberg", "location": { "settlement": "Heidelberg", "country": "Germany" } }, "email": "pado@cl.uni-heidelberg.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Entailment pairs are sentence pairs of a premise and a hypothesis, where the premise textually entails the hypothesis. Such sentence pairs are important for the development of Textual Entailment systems. In this paper, we take a closer look at a prominent strategy for their automatic acquisition from newspaper corpora, pairing first sentences of articles with their titles. We propose a simple logistic regression model that incorporates and extends this heuristic and investigate its robustness across three languages and three domains. We manage to identify two predictors which predict entailment pairs with a fairly high accuracy across all languages. However, we find that robustness across domains within a language is more difficult to achieve.", "pdf_parse": { "paper_id": "W11-0111", "_pdf_hash": "", "abstract": [ { "text": "Entailment pairs are sentence pairs of a premise and a hypothesis, where the premise textually entails the hypothesis. Such sentence pairs are important for the development of Textual Entailment systems. In this paper, we take a closer look at a prominent strategy for their automatic acquisition from newspaper corpora, pairing first sentences of articles with their titles. We propose a simple logistic regression model that incorporates and extends this heuristic and investigate its robustness across three languages and three domains. We manage to identify two predictors which predict entailment pairs with a fairly high accuracy across all languages. However, we find that robustness across domains within a language is more difficult to achieve.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic processing has become a major focus of attention in NLP. However, different applications such as Question Answering, Information Extraction and Machine Translation often adopt very different, task-specific semantic processing strategies. Textual entailment (TE) was introduced by Dagan et al. (2006) as a \"meta-task\" that can subsume a large part of the semantic processing requirements of such applications by providing a generic concept of inference that corresponds to \"common sense\" reasoning patterns. Textual Entailment is defined as a relation between two natural language utterances (a Premise P and a Hypothesis H) that holds if \"a human reading P would infer that H is most likely true\". See, e.g., the ACL \"challenge paper\" by Sammons et al. (2010) for further details.", "cite_spans": [ { "start": 289, "end": 308, "text": "Dagan et al. (2006)", "ref_id": "BIBREF6" }, { "start": 747, "end": 768, "text": "Sammons et al. 
(2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The successive TE workshops that have taken place yearly since 2005 have produced annotation for English which amount to a total of several thousand entailing Premise-Hypothesis sentence pairs, which we will call entailment pairs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) P: Swedish bond yields end 21 basis points higher. H: Swedish bond yields rose further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From the machine learning perspective assumed by many approaches to TE, this is a very small number of examples, given the complex nature of entailment. Given the problems of manual annotation, therefore, Burger and Ferro (2005) proposed to take advantage of the structural properties of a particular type of discourse -namely newspaper articles -to automatically harvest entailment pairs. They proposed to pair the title of each article with its first sentence, interpreting the first sentence as Premise and the title as Hypothesis. Their results were mixed, with an average of 50% actual entailment pairs among all pairs constructed in this manner. SVMs which identified \"entailment-friendly\" documents based on their bags of words lead to an accuracy of 77%. Building on the same general idea, Hickl et al. (2006) applied a simple unsupervised filter which removes all entailment pair candidates that \"did not share an entity (or an NP)\". They report an accuracy of 91.8% on a manually evaluated sample -considerably better Burger and Ferro. The article however does not mention the size of the original corpus, and whether \"entity\" is to be understood as named entity, so it is difficult to assess what its recall is, and whether it presupposes a high-quality NER system. In this paper, we model the task using a logistic regression model that allows us to synchronously analyse the data and predict entailment pairs, and focus on the question of how well these results generalize across domains and languages, for many of which no entailment pairs are available at all. We make three main contributions: (a), we define an annotation scheme based on semantic and discourse phenomena that can break entailment and annotate two datasets with it; (b), we idenfiy two robust properties of sentence pairs that correlate strongly with entailment and which are robust enough to support high-precision entailment pair extraction; (c), we find that cross-domain differences are actually larger than cross-lingual differences, even for languages as different as German and Hindi.", "cite_spans": [ { "start": 205, "end": 228, "text": "Burger and Ferro (2005)", "ref_id": "BIBREF4" }, { "start": 798, "end": 817, "text": "Hickl et al. (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Plan of the paper. Section 2 defines our annotation scheme. In Section 3, we sketch the logistic regression framework we use for analysis, and motivate our choice of predictors. Sections 4 and 5 present the two experiments on language and domain comparisons, respectively. 
We conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The motivation of our annotation scheme is to better understand why entailment breaks down between titles and first sentences of newswire articles. We subdivide the general no entailment category of earlier studies according to an inventory of reasons for non-entailment that we collected from an informal inspection of some dozen articles from an English-language newspaper. Additionally, we separate out sentence pairs that are ill-formed in the sense of not forming one proposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A fine-grained annotation scheme for entailment pairs", "sec_num": "2" }, { "text": "No-par (Partial entailment): The Premise entails the Hypothesis almost, but not completely, in one of two ways: (a), the Hypothesis is a conjunction and the Premise entails just one conjunct; or (b), Premise and Hypothesis share the main event, but the Premise is missing an argument or adjunct that forms part of the Hypothesis. Presumably, in our setting, such information is provided by sentences in the article other than the first one. In Ex. (1), if P and H were switched, this would be the case for the size of the rise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtypes of non-entailment", "sec_num": "2.1" }, { "text": "No-pre (Presupposition): The Premise uses a construction which can only be understood with information from the Hypothesis, typically a definite description or an adjunct. This category arises because the title stands before the first sentence and is available as context. In the following example, the Premise NP \"des Verbandes\" can only be resolved through the mention of \"VDA\" (the German car manufacturers' association) in the Hypothesis. No-con (Contradiction): The Premise contradicts the Hypothesis. No-emb (Embedding): The Premise uses an embedding that breaks entailment (e.g., a modal adverbial or a non-factual embedding verb). In Example (4), the proposition of the Hypothesis is embedded under \"expect\". No-oth (Other): All other negative examples, where Premise and Hypothesis are well-formed but could not be assigned to a more specific category, are included under this tag. In this sense, \"Other\" is a catch-all category. Often, Premise and Hypothesis, taken in isolation, are simply unrelated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtypes of non-entailment", "sec_num": "2.1" }, { "text": "(5) P: Victor the Parrot kept shrieking \"Voda, Voda\" -\"Water, Water\". H: Thirsty jaguar procures water for Bulgarian zoo.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtypes of non-entailment", "sec_num": "2.1" }, { "text": "Err (Error): These cases arise due to errors in sentence boundary detection: Premise or Hypothesis may be cut off in the middle of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ill-formed sentence pairs", "sec_num": "2.2" }, { "text": "Ill (Ill-formed): Often, the titles are not single grammatical sentences and can therefore not be interpreted sensibly as the Hypothesis of an entailment pair. 
They can be incomplete propositions such as NPs or PPs (\"Beautiful house situated in woods\"), or, frequently, combinations of multiple sentences (\"RESEARCH ALERT -Mexico upped, Chile cut.\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ill-formed sentence pairs", "sec_num": "2.2" }, { "text": "We will model the entailment annotation labels on candidate sentence pairs using a logistic regression model. From a machine learning point of view, logistic regression models can be seen as rather simple statistical classifiers which can be used to acquire new entailment pairs. From a linguistic point of view, they can be used to explain the phenomena in the data, see e.g., Bresnan et al. (2007) . Formally, logistic regression models assume that datapoints consist of a set of predictors x and a binary response variable y. They have the form", "cite_spans": [ { "start": 379, "end": 400, "text": "Bresnan et al. (2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling entailment with logistic regression", "sec_num": "3" }, { "text": "p(y = 1) = 1 / (1 + e^{\u2212z}) with z = \u2211_i \u03b2_i x_i (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling entailment with logistic regression", "sec_num": "3" }, { "text": "where p(y = 1) is the probability that datapoint x belongs to the positive class, and \u03b2_i is the coefficient assigned to the linguistically motivated factor x_i. Model estimation sets the parameters \u03b2 so that the likelihood of the observed data is maximized. From the linguistics perspective, we are most interested in analysing the importance of the different predictors: for each predictor x_i, the estimated value of its coefficient \u03b2_i can be compared to its estimated standard error, and it is possible to test the hypothesis that \u03b2_i = 0, i.e., that the predictor does not significantly contribute to the model. Furthermore, \u03b2_i can be interpreted as a log odds ratio -that is, e^{\u03b2_i} is the multiplicative change in the odds of the response variable being positive depending on x_i being positive: e^{\u03b2_i} = [P(y = 1 | x_i = 1, . . . ) / P(y = 0 | x_i = 1, . . . )] / [P(y = 1 | x_i = 0, . . . ) / P(y = 0 | x_i = 0, . . . )] (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling entailment with logistic regression", "sec_num": "3" }, { "text": "The fact that z is just a linear combination of predictor weights encodes the assumption that the log odds combine linearly among factors. From the natural language processing perspective, we would like to create predictions for new observations. Note, however, that simply assessing the significance of predictors on some dataset, as provided by the logistic regression model, corresponds to an evaluation of the model on the training set, which is prone to the problem of overfitting. We will therefore in our experiments always apply the models acquired from one dataset to another to see how well they generalize.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling entailment with logistic regression", "sec_num": "3" }, { "text": "Next, we need a set of plausible predictors that we can plug into the logistic regression framework. These predictors should ideally be language-independent.
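", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Choice of Predictors", "sec_num": "3.1" }, { "text": "To make the model concrete, here is a minimal sketch (in Python; our actual analysis was done in R with the rms package) of how Eq. (1) scores a candidate pair and how Eq. (2) reads off an odds ratio. The coefficient values are made up for illustration and are not the fitted values from our experiments.

```python
import math

# Eq. (1): logistic response p(y=1) = 1 / (1 + e^(-z)),
# with z = intercept + sum_i beta_i * x_i.
def entailment_probability(predictors, coefficients, intercept):
    z = intercept + sum(coefficients[name] * value for name, value in predictors.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients (illustration only): positive weight on tf-idf
# overlap between title and first sentence, negative weight on sentence-end
# punctuation inside the title.
beta = {'weighted_overlap': 2.3, 'punctuation': -1.2}
p = entailment_probability({'weighted_overlap': 0.8, 'punctuation': 0.0}, beta, intercept=-0.5)

# Eq. (2): e^(beta_i) is the multiplicative change in the odds of entailment
# when the binary predictor x_i flips from 0 to 1.
odds_ratio_punctuation = math.exp(beta['punctuation'])

# For extraction (cf. Section 4.3), precision can be traded against recall by
# thresholding the predicted probability.
is_entailment_pair = p > 0.9
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Choice of Predictors", "sec_num": "3.1" }, { "text": "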
We analyse the categories of our annotation, as an inventory of phenomena that break entailment, to motivate a small set of robust predictors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Choice of Predictors", "sec_num": "3.1" }, { "text": "Following early work on textual entailment, we use word overlap as a strong indicator of entailment (Monz and de Rijke, 2001 ). Our weighted overlap predictor uses the well-known tf/idf weighting scheme to compute the overlap between P and H (Manning et al., 2008) :", "cite_spans": [ { "start": 100, "end": 124, "text": "(Monz and de Rijke, 2001", "ref_id": "BIBREF10" }, { "start": 242, "end": 264, "text": "(Manning et al., 2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Choice of Predictors", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "weightedOverlap(P, H, D) = \u2211_{w \u2208 P \u2229 H} tf-idf(w, D) / \u2211_{w \u2208 H} tf-idf(w, D)", "eq_num": "(3)" } ], "section": "Choice of Predictors", "sec_num": "3.1" }, { "text": "where we treat each article as a separate document and the whole corpus as the document collection D. We expect that No-oth pairs generally have the lowest weighted overlap, followed by No-par pairs, while Yes pairs have the highest weighted overlap. We also use a categorical version of this observation in the form of our strict noun match predictor. This predictor is similar in spirit to the proposal by Hickl et al. (2006) mentioned in Section 1. The boolean strict noun match predictor is true if all Hypothesis nouns are present in the Premise, and is therefore a predictor that is geared towards precision rather than recall. A third predictor, motivated by the No-par and No-oth categories, is the number of words in the article: No-oth sentence pairs often come from long articles, where the first sentence provides merely an introduction. For this predictor, log num words, we count the total number of words in the article and logarithmize it. 1 The remaining subcategories of No were more difficult to model. No-pre pairs should be identifiable by testing whether the Premise contains a definite description that cannot be accommodated, a difficult problem that seems to require world knowledge. Similarly, the recognition of contradictions, as is required to find No-con pairs, is very difficult in itself (de Marneffe et al., 2008) . Finally, No-emb requires the detection of a counterfactual context in the Premise. Since we do not currently see robust, language-independent ways of modelling these phenomena, we do not include specific predictors to address them. The situation is similar with regard to the Err category. While it might be possible to detect incomplete sentences with the help of a parser, this again involves substantial knowledge about the language. The Ill category, however, appears easier to target: at least, cases of Hypotheses consisting of multiple phrases can be detected easily by checking for sentence end markers in the middle of the Hypothesis (full stop, colon, dash). We call this predictor punctuation.", "cite_spans": [ { "start": 404, "end": 423, "text": "Hickl et al. (2006)", "ref_id": "BIBREF8" }, { "start": 1319, "end": 1345, "text": "(de Marneffe et al., 2008)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Choice of Predictors", "sec_num": "3.1" }
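, { "text": "The following sketch (Python; a simplified stand-in for our actual preprocessing, with tokenization, POS tagging, and document frequencies assumed to be given) summarizes the predictors just introduced: weighted overlap (Eq. (3)), strict noun match, and punctuation.

```python
import math

# tf-idf weight of a word, given its count in the current article, the number
# of articles containing it (document frequency), and the collection size.
def tf_idf(word, article_counts, doc_freq, n_docs):
    return article_counts.get(word, 0) * math.log(n_docs / (1 + doc_freq.get(word, 0)))

# Eq. (3): tf-idf mass of Hypothesis words that also occur in the Premise,
# normalized by the total tf-idf mass of the Hypothesis.
def weighted_overlap(premise_tokens, hyp_tokens, article_counts, doc_freq, n_docs):
    shared = set(premise_tokens) & set(hyp_tokens)
    num = sum(tf_idf(w, article_counts, doc_freq, n_docs) for w in shared)
    den = sum(tf_idf(w, article_counts, doc_freq, n_docs) for w in set(hyp_tokens))
    return num / den if den > 0 else 0.0

# True iff every Hypothesis noun also occurs in the Premise (precision-oriented).
def strict_noun_match(premise_nouns, hyp_nouns):
    return set(hyp_nouns) <= set(premise_nouns)

# True if a sentence-end marker occurs in the middle of the Hypothesis,
# suggesting an ill-formed title.
def punctuation(hypothesis):
    return any(ch in hypothesis[:-1] for ch in '.:-')
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Choice of Predictors", "sec_num": "3.1" }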
, { "text": "This experiment performs a cross-lingual comparison of three newswire corpora. We use English, German, and Hindi. All three belong to the Indo-European language family, but English and German are more closely related.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data sources and preparation", "sec_num": "4.1" }, { "text": "For English and German, we used the Reuters RCV2 Multilingual Corpus 2 . RCV2 contains over 487,000 news stories in 13 different languages. Almost all news stories cover the business and politics domains. The corpus marks the title of each article; we used the sentence splitter provided by TreeTagger (Schmid, 1995) to extract the first sentences. Our Hindi corpus is extracted from the text collection of South Asian languages prepared by the EMILLE project (Xiao et al., 2004) 3 . We use the Hindi monolingual data, which was crawled from Webdunia, 4 an Indian daily online newspaper. The articles are predominantly political, with a focus on Indo-Pakistani and Indo-US affairs. We identify sentence boundaries with the Hindi sentence marker ('|'), which is used exclusively for this purpose.", "cite_spans": [ { "start": 302, "end": 316, "text": "(Schmid, 1995)", "ref_id": "BIBREF12" }, { "start": 460, "end": 479, "text": "(Xiao et al., 2004)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Data sources and preparation", "sec_num": "4.1" }, { "text": "We preprocessed the data by extracting the title and the first sentence, treating the first sentence as Premise and the title as Hypothesis. We applied a filter to remove pairs where entailment was impossible or very unlikely. Specifically, our filter keeps only sentence pairs that (a) share at least one noun and where (b) both sentences include at least one verb and are not questions. Table 1 shows the corpus sizes before and after filtering. Note that the percentages of selected sentences are in the 45%-55% range for all three languages. This filter could presumably be improved by requiring a shared named entity, but since language-independent NER is still an open research issue, we did not follow up on this avenue. We randomly sampled 1,000 of the remaining sentence pairs per language for manual annotation.", "cite_spans": [], "ref_spans": [ { "start": 400, "end": 407, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data sources and preparation", "sec_num": "4.1" }, { "text": "First, we compared the frequencies of the annotation categories defined in Section 2. The results are shown in Table 2 . We find that our simple preprocessing filter results in an accuracy of between 75% and 82%. This is still considerably below the results of Hickl et al., who report 92% accuracy on their English data. 5 Even though the overall percentage of \"yes\" cases is quite similar among languages, the details of the distribution differ. One fairly surprising observation was the large number of ill-formed sentence pairs. As described in Section 2, this category comprises cases where the Hypothesis (i.e., a title) is not a grammatical sentence. Further analysis of the category shows that the common patterns are participle constructions (Ex. (6)) and combinations of multiple statements (Ex. (7)). The participle construction is particularly prominent in German. (6) Glencoe Electric, Minn., rated single-A by Moody's. The \"no\"-categories make up a total of 11.3% (English), 6.6% (German), and 20.7% (Hindi). The \"other\" and \"partial\" categories clearly dominate. This is to be expected, in particular the high number of partial entailments. 
The \"other\" category mostly consists of cases where the title summarizes the whole article, but the first sentence provides only a gentle introduction to the topic:", "cite_spans": [ { "start": 318, "end": 319, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Distribution of annotation categories", "sec_num": "4.2" }, { "text": "(8) P: One automotive industry analyst has dubbed it the 'Lincoln Town Truck'. H: Ford hopes Navigator will lure young buyers to Lincoln.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distribution of annotation categories", "sec_num": "4.2" }, { "text": "As regards the high ratio of \"no-other\" cases in the Hindi corpus, we found a high number of instances where the title states the gist of the article too differently from the first sentence to preserve entailment: The remaining error categories (embedding, presupposition, contradiction) were, disappointingly, almost absent. Another sizable category is formed by errors, though. We find the highest percentage for English, where our sentence splitter misinterpreted full stops in abbreviations as sentence boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distribution of annotation categories", "sec_num": "4.2" }, { "text": "We estimated logistic regression models on each dataset, using the predictors from Section 3.1. Considering the eventual goal of extracting entailment pairs, we use the decision yes vs. everything else as our response variable. The analysis was performed with R, using the rms 6 and ROCR 7 packages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modelling the data", "sec_num": "4.3" }, { "text": "Analysis of predictors. The coefficients for the predictors and their significances are shown in Table 3 . There is considerable parallelism between the languages. In all three languages, weighted overlap between H and P is a significant predictor: high overlap indicates entailment, and vice versa. Its effect size is large as well: Perfect overlap increases the probability of entailment for German by a factor of e 0.77 = 2.16, for English by 10, and for Hindi even by 28. Similarly, the punctuation predictor comes out as a significant negative effect for all three languages, presumably by identifying ill-formed sentence pairs. In contrast, the length of the article (log num words) is not a significant predictor. This is a surprising result, given our hypothesis that long articles often involve an \"introduction\" which reduces the chance for entailment between the title and the first sentence. The explanation is that the two predictors, log num words and weighted overlap, are highly significantly correlated in all three corpora. Since weighted overlap is the predictive of the two, the model discards article length. Finally, strict noun match, which requires that all nouns match between H and P, is assigned a positive coefficient for each language, but only reaches significance for Hindi. This is the only genuine cross-lingual difference: In our Hindi corpus, the titles are copied more verbatim from the text than for English and German (median weighted overlap: Hindi 0.76, English 0.72, German 0.69). Consequently, in English and German the filter discards too many entailment instances. 
For all three languages, though, the coefficient is small -for Hindi, where it is largest, it increases the odds by a factor of e^{0.39} \u2248 1.4.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 104, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Modelling the data", "sec_num": "4.3" }, { "text": "Evaluation. We trained models on the three corpora, using only the two predictors that contributed significantly in all languages (weighted overlap and punctuation), in order to avoid overfitting on the individual datasets. 8 We applied each model to each dataset. How such models should be evaluated depends on the intended purpose of the classification. We assume that it is fairly easy to obtain large corpora of newspaper text, which makes precision an issue rather than recall. The logistic regression classifier assigns a probability to each datapoint, so we can trade off recall and precision. We fix recall at a reasonable value (30%) and compare precision values. Table 4: Exp. 1: Precision for the class \"yes\" (entailment) at 30% Recall", "cite_spans": [ { "start": 224, "end": 225, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 673, "end": 680, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Modelling the data", "sec_num": "4.3" }, { "text": "Our expectation is that each model will perform best on its own corpus (since this is basically the training data), and worse on the other languages. The size of the drop for the other languages reflects the differences between the corpora as well as the degree of overfitting the models show to their training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modelling the data", "sec_num": "4.3" }, { "text": "The actual results are shown in Table 4 . The precision is fairly high, generally over 90%, and well above the baseline percentage of entailment pairs. The German data is modelled best by the German model, with the two other models performing 3 percent worse. The situation is similar, although less pronounced, on Hindi data, where the Hindi-trained model is 0.4% better than the two other models. For English, the Hindi model even outperforms the English model by 0.3% 9 , which in turn works about 1% better than the German model. In sum, the logistic regression models can be applied very well across languages, with little loss in precision. The German data, with its high ratio of ill-formed headlines (cf. Table 2), is most difficult to model. Hindi is simplest, due to the tendency of title and first sentence to be almost identical (cf. the large weight for the overlap predictor).", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 39, "text": "Table 4", "ref_id": null }, { "start": 714, "end": 722, "text": "Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Modelling the data", "sec_num": "4.3" }, { "text": "5 Experiment 2: Analysis by Domain of German corpora", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modelling the data", "sec_num": "4.3" }, { "text": "This experiment compares three German corpora from different newspapers to study the impact of domain differences: Reuters, \"Stuttgarter Zeitung\", and \"Die Zeit\". These corpora differ in domain and in style. The Reuters corpus was already described in Section 4.1. 
\"Stuttgarter Zeitung\" (StuttZ) is a daily regional newspaper which covers international business and politics like Reuters, but does not draw its material completely from large news agencies and gives more importance to regional and local events. Its style is therefore less consistent. Our corpus covers some 80,000 sentences of text from StuttZ. The third corpus comprises over 4 million sentences of text from \"Die Zeit\", a major German national weekly. The text is predominantly from the 2000s, plus selected articles from the 1940s through 1990s. \"Die Zeit\" focuses on op-ed pieces and general discussions of political and social issues. It also covers arts and science, which the two other newspapers rarely do.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "We extracted and annotated entailment pair candidates in the same manner as before (cf. Section 4.1). The new breakdown of annotation categories in Table ( 10) shows, in comparison to the cross-lingual results in Table 2 , a higher incidence of errors, which we attribute to formatting problems of these corpora. Compared to the German Reuters corpus we considered in Exp. 1, StuttZ and Die Zeit contain considerably fewer entailment pairs, most notably Die Zeit, where the percentage of entailment pairs is just 21.6% in our sample, compared to 82.3% for Reuters. Notably, there are almost no cases where the first sentence represents a partial entailment; in contrast, for more than one third of the examples (33.9%), there is no entailment relation between the title and the first sentence. This seems to be a domain-dependent, or even stylistic, effect: in \"Die Zeit\", titles are often designed solely as \"bait\" to interest readers in the article: Other titles are just noun or verb phrases, which accounts for the large number (39%) of ill-formed pairs.", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 155, "text": "Table (", "ref_id": null }, { "start": 213, "end": 220, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Distribution of annotation categories", "sec_num": "5.2" }, { "text": "Predictors and evaluation. The predictors of the logistic regression models for the three German corpora are shown in Table 6 . The picture is strikingly similar to the results of Exp. 1 (Table 3) : weighted overlap and punctuation are highly significant predictors for all three corpora (except punctuation, which is insignificant for StuttZ); even the effect sizes are roughly similar. Again, neither sentence length nor strict noun match are significant. This indicates that the predictors we have identified work fairly robustly. Unfortunately, this does not imply that they always work well. Table 6 shows the precision of the predictors in Exp. 2, again at 30% Recall. Here, the difference to Exp. 1 (Table 4. 3) is striking. First, overfitting of the predictors is worse across domains, with losses of 5% on Reuters and Die Zeit when they are classified with models trained on other corpora even though use just two generic features. Second, and more seriously, it is much more difficult to extract entailment pairs from the Stuttgarter Zeitung corpus and, especially, the Die Zeit corpus. For the latter, we can obtain a precision of at most 46.7%, compared to >90% in Exp. 1. We interpret this result as evidence that domain adaptation may be an even greater challenge than multilinguality in the acquisition of entailment pairs. 
More specifically, our impression is that the heuristic of pairing title and first sentence works fairly well for a particular segment of newswire text, but not otherwise. This segment consists of factual, \"no-nonsense\" articles provided by large news agencies such as Reuters, which tend to be simple in their discourse structure and have an informative title. In domains where articles become longer, and the intent to entertain becomes more pertinent (as for Die Zeit), the heuristic fails very frequently. Note that the weighted overlap predictor cannot recover all negative cases. Example (10) is a case in point: one of the two informative words in H, \"Doris\" and \"gr\u00fcn\", is in fact in P.", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 6", "ref_id": "TABREF7" }, { "start": 187, "end": 196, "text": "(Table 3)", "ref_id": null }, { "start": 597, "end": 604, "text": "Table 7", "ref_id": "TABREF7" }, { "start": 706, "end": 715, "text": "(Table 4)", "ref_id": null } ], "eq_spans": [], "section": "Modelling the data", "sec_num": "5.3" }, { "text": "Domain specificity. The fact that it is difficult to extract entailment pairs from some corpora is serious exactly because, according to our intuition, the \"easier\" news agency corpora (like Reuters) are domain-specific. We quantify this intuition with an approach by Ciaramita and Baroni (2006) , who propose to model the representativeness of web-crawled corpora as the KL divergence between their Laplace-smoothed unigram distribution P and that of a reference corpus, Q (w \u2208 W are vocabulary words):", "cite_spans": [ { "start": 678, "end": 705, "text": "Ciaramita and Baroni (2006)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 536, "end": 543, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Modelling the data", "sec_num": "5.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D(P, Q) = \u2211_{w \u2208 W} P(w) log (P(w) / Q(w))", "eq_num": "(4)" } ], "section": "Modelling the data", "sec_num": "5.3" }, { "text": "We use the deWac German web corpus (Baroni et al., 2009) as reference, making the idealizing assumption that it is representative for the German language. We interpret a large distance from deWac as domain specificity. [Table 8: Exp. 2: Domain specificity (KL distance D(\u00b7 || deWac)) and typical content words (highest P(w)/Q(w)). Reuters: 0.98; H\u00e4ndler (trader), B\u00f6rse (exchange), Prozent (per cent), erkl\u00e4rte (stated). StuttZ: 0.93; DM (German Mark), Prozent (per cent), Millionen (millions), Gesch\u00e4ftsjahr (fiscal year), Milliarden (billions). Die Zeit: 0.64; hei\u00dft (means), wei\u00df (knows), l\u00e4\u00dft (leaves/lets).] The results in Table 8 bear out our hypothesis: Die Zeit is less domain-specific than StuttZ, which in turn is less specific than Reuters. The table also lists the content words (nouns/verbs) that are most typical for each corpus, i.e., which have the highest value of P(w)/Q(w). The lists bolster the interpretation that Reuters and StuttZ concentrate on the economic domain, while the typical terms of Die Zeit show an argumentative style, but no obvious domain bias.
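As a minimal sketch of Eq. (4) (Python; simplified tokenization, and not the code used for our analysis):

```python
import math
from collections import Counter

# Laplace-smoothed unigram distribution over a fixed vocabulary.
def unigram_distribution(tokens, vocabulary):
    counts = Counter(tokens)
    total = len(tokens) + len(vocabulary)  # add-one smoothing
    return {w: (counts[w] + 1) / total for w in vocabulary}

# Eq. (4): KL divergence D(P, Q) of a corpus P from a reference corpus Q.
def kl_divergence(corpus_tokens, reference_tokens):
    vocabulary = set(corpus_tokens) | set(reference_tokens)
    p = unigram_distribution(corpus_tokens, vocabulary)
    q = unigram_distribution(reference_tokens, vocabulary)
    return sum(p[w] * math.log(p[w] / q[w]) for w in vocabulary)

# Words most typical of a corpus relative to the reference: highest P(w)/Q(w).
def typical_words(corpus_tokens, reference_tokens, k=5):
    vocabulary = set(corpus_tokens) | set(reference_tokens)
    p = unigram_distribution(corpus_tokens, vocabulary)
    q = unigram_distribution(reference_tokens, vocabulary)
    return sorted(vocabulary, key=lambda w: p[w] / q[w], reverse=True)[:k]
```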
In sum, domain specificity is inversely correlated with the difficulty of extracting entailment pairs: from a representativity standpoint, we should draw entailment pairs from Die Zeit.", "cite_spans": [ { "start": 35, "end": 56, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 234, "end": 241, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Modelling the data", "sec_num": "5.3" }, { "text": "In this paper, we have discussed the robustness of extracting entailment pairs from the title and first sentence of newspaper articles. We have proposed a logistic regression model and have analysed its performance on two datasets that we have created: a cross-lingual one and a cross-domain one. Our cross-lingual experiment shows a positive result: despite differences in the distribution of annotation categories across domains and languages, the predictors of all logistic regression models look remarkably similar. In particular, we have found two predictors which are correlated significantly with entailment across (almost) all languages and domains. These are (a), a tf/idf measure of word overlap between the title and the first sentence; and (b), the presence of punctuation indicating that the title is not a single grammatical sentence. These predictors extract entailment pairs from newswire text at a precision of >90% at a recall of 30%, and represent a simple, cross-lingually robust method for entailment pair acquisition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The cross-domain experiment, however, forces us to qualify this positive result. On two other German corpora from different newspapers, we see a substantial degradation of the model's performance. It may seem surprising that cross-domain robustness is a larger problem than cross-lingual robustness. Our interpretation is that the limiting factor is the degree to which the underlying assumption, namely that the first sentence entails the title, is true. If the assumption is true only for a minority of sentences, our predictors cannot save the day. This assumption holds well for the Reuters corpora, but less so for the other newspapers. Unfortunately, we also found that the Reuters corpora are at the same time thematically constrained, and therefore only of limited use for extracting a representative corpus of entailment pairs. A second problem is that the addition of features beyond the two mentioned above threatens to degrade the classifier due to overfitting, at least across domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Given these limitations of the present headline-based approach, other approaches that are more generally applicable may need to be explored. Entailment pairs have for example been extracted from Wikipedia (Bos et al., 2009) . Another direction is to build on methods to extract paraphrases from comparable corpora (Barzilay and Lee, 2003) , and extend them to capture asymmetrical pairs, where entailment holds in one, but not the other, direction.", "cite_spans": [ { "start": 204, "end": 222, "text": "(Bos et al., 2009)", "ref_id": "BIBREF2" }, { "start": 313, "end": 337, "text": "(Barzilay and Lee, 2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This makes the coefficient easier to interpret. The predictive difference is minimal. 
2 http://trec.nist.gov/data/reuters/reuters.html 3 http://www.elda.org/catalogue/en/text/W0037.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.webdunia.com 5 We attribute the difference to the filtering scheme, which is difficult to reconstruct from Hickl et al. (2006).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://biostat.mc.vanderbilt.edu/twiki/bin/view/Main/Design 7 http://rocr.bioinf.mpi-sb.mpg.de/ 8 Subsequent analysis of \"full\" models (with all features) showed that they did not generally improve over two-feature models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The English model outperforms the Hindi model at higher recall levels, though.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments. The first author would like to acknowledge the support of a WISE scholarship granted by DAAD (German Academic Exchange Service).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The wacky wide web: A collection of very large linguistically processed web-crawled corpora", "authors": [ { "first": "M", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "S", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": "A", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "E", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 2009, "venue": "Journal of Language Resources and Evaluation", "volume": "43", "issue": "3", "pages": "209--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baroni, M., S. Bernardini, A. Ferraresi, and E. Zanchetta (2009). The wacky wide web: A collection of very large linguistically processed web-crawled corpora. Journal of Language Resources and Evaluation 43(3), 209-226.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning to paraphrase: An unsupervised approach using multiple-sequence alignment", "authors": [ { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT/NAACL", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barzilay, R. and L. Lee (2003). Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of HLT/NAACL, Edmonton, AL, pp. 16-23.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Textual entailment at EVALITA", "authors": [ { "first": "J", "middle": [], "last": "Bos", "suffix": "" }, { "first": "M", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "F", "middle": [ "M" ], "last": "Zanzotto", "suffix": "" } ], "year": 2009, "venue": "Proceedings of IAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bos, J., M. Pennacchiotti, and F. M. Zanzotto (2009). Textual entailment at EVALITA 2009. 
In Proceedings of IAAI, Reggio Emilia.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Predicting the dative alternation", "authors": [ { "first": "J", "middle": [], "last": "Bresnan", "suffix": "" }, { "first": "A", "middle": [], "last": "Cueni", "suffix": "" }, { "first": "T", "middle": [], "last": "Nikitina", "suffix": "" }, { "first": "H", "middle": [], "last": "Baayen", "suffix": "" } ], "year": 2007, "venue": "Cognitive Foundations of Interpretation", "volume": "", "issue": "", "pages": "69--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bresnan, J., A. Cueni, T. Nikitina, and H. Baayen (2007). Predicting the dative alternation. In G. Bouma, I. Kraemer, and J. Zwarts (Eds.), Cognitive Foundations of Interpretation, pp. 69-94. Royal Netherlands Academy of Science.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Generating an entailment corpus from news headlines", "authors": [ { "first": "J", "middle": [], "last": "Burger", "suffix": "" }, { "first": "L", "middle": [], "last": "Ferro", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment", "volume": "", "issue": "", "pages": "49--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burger, J. and L. Ferro (2005). Generating an entailment corpus from news headlines. In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, pp. 49-54.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A figure of merit for the evaluation of web-corpus randomness", "authors": [ { "first": "M", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "M", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "217--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ciaramita, M. and M. Baroni (2006). A figure of merit for the evaluation of web-corpus randomness. In Proceedings of EACL, Trento, Italy, pp. 217-224.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The PASCAL recognising textual entailment challenge", "authors": [ { "first": "I", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "O", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "B", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "Machine Learning Challenges", "volume": "3944", "issue": "", "pages": "177--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagan, I., O. Glickman, and B. Magnini (2006). The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Volume 3944 of Lecture Notes in Computer Science, pp. 177-190. Springer.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Finding contradictions in text", "authors": [ { "first": "M.-C", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "A", "middle": [ "N" ], "last": "Rafferty", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "1039--1047", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Marneffe, M.-C., A. N. Rafferty, and C. D. Manning (2008). Finding contradictions in text. In Proceedings of the ACL, Columbus, Ohio, pp. 
1039-1047.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Recognizing textual entailment with LCC's Groundhog system", "authors": [ { "first": "A", "middle": [], "last": "Hickl", "suffix": "" }, { "first": "J", "middle": [], "last": "Williams", "suffix": "" }, { "first": "J", "middle": [], "last": "Bensley", "suffix": "" }, { "first": "K", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "B", "middle": [], "last": "Rink", "suffix": "" }, { "first": "Y", "middle": [], "last": "Shi", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Second PASCAL Challenges Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hickl, A., J. Williams, J. Bensley, K. Roberts, B. Rink, and Y. Shi (2006). Recognizing textual entailment with LCC's Groundhog system. In Proceedings of the Second PASCAL Challenges Workshop.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Introduction to Information Retrieval", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "P", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C. D., P. Raghavan, and H. Sch\u00fctze (2008). Introduction to Information Retrieval (1st ed.). Cambridge University Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Light-weight entailment checking for computational semantics", "authors": [ { "first": "C", "middle": [], "last": "Monz", "suffix": "" }, { "first": "M", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ICoS", "volume": "", "issue": "", "pages": "59--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Monz, C. and M. de Rijke (2001). Light-weight entailment checking for computational semantics. In Proceedings of ICoS, Siena, Italy, pp. 59-72.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Ask Not What Textual Entailment Can Do for You", "authors": [ { "first": "M", "middle": [], "last": "Sammons", "suffix": "" }, { "first": "V", "middle": [], "last": "Vydiswaran", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1199--1208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sammons, M., V. Vydiswaran, and D. Roth (2010). \"Ask Not What Textual Entailment Can Do for You...\". In Proceedings of ACL, Uppsala, Sweden, pp. 1199-1208.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improvements in part-of-speech tagging with an application to German", "authors": [ { "first": "H", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the SIGDAT Workshop at ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schmid, H. (1995). Improvements in part-of-speech tagging with an application to German. 
In Proceedings of the SIGDAT Workshop at ACL, Cambridge, MA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Developing Asian language corpora: Standards and practice", "authors": [ { "first": "Z", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "T", "middle": [], "last": "Mcenery", "suffix": "" }, { "first": "P", "middle": [], "last": "Baker", "suffix": "" }, { "first": "A", "middle": [], "last": "Hardie", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Fourth Workshop on Asian Language Resources", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiao, Z., T. McEnery, P. Baker, and A. Hardie (2004). Developing Asian language corpora: Standards and practice. In Proceedings of the Fourth Workshop on Asian Language Resources, Sanya, China, pp. 1-8.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "content": "
(4) P: An Arkansas gambling amendment [...] is expected to be submitted to the state Supreme Court Monday for a rehearing, a court official said.
H: Arkansas gaming petition goes before court again Monday
", "text": "", "type_str": "table", "html": null }
English German Hindi
Original 473,874 (100%) 112,259 (100%) 20,209 (100%)
Filtered 264,711 (55.8%) 50,039 (44.5%) 10,475 (51.8%)
Table 1: Pair extraction statistics
Corpus err ill no-con no-emb no-oth no-par no-pre yes
English Reuters 3.5 2.9 0 0.2 3.7 7.4 0 82.3
German Reuters 2.1 11.0 0.4 0.2 4.3 2.1 0.2 79.7
Hindi Emille 1.1 2.5 0 0.3 14.7 5.7 0 75.7
Table 2: Exp. 1: Distribution of annotation categories (in percent)
", "text": "No. of sentence pairs", "type_str": "table", "html": null }, "TABREF7": { "num": null, "content": "
Table 6: Exp. 2: Predictors in the logreg model (*: p<0.05; **: p<0.01; ***: p<0.001) [table body not recoverable]
Table 7: Exp. 2: Precision for the class \"yes\" at 30% recall (rows: test data; columns: models)
Data Reuters StuttZ Die Zeit
Reuters 91.6 85.4 91.6
StuttZ 83.0 83.0 82.6
Die Zeit 45.2 45.2 46.7
", "text": "", "type_str": "table", "html": null }, "TABREF8": { "num": null, "content": "
(10) H: Doris, es ist gr\u00fcn! ('Doris, it is green!')
", "text": "", "type_str": "table", "html": null } } } }