Each example carries one or more category tags (lexical semantics, predicate-argument structure, logic, knowledge), a source domain, a premise, a hypothesis, and an integer label from 0 to 2.

| lexical semantics | predicate-argument structure | logic | knowledge | domain | premise | hypothesis | label |
|---|---|---|---|---|---|---|---|
| Lexical entailment | | | | ACL | They then use a discriminative model to rerank the translation output using additional nonworld level features. | They then use a generative model to rerank the translation output using additional nonworld level features. | 2 |
| Lexical entailment | | | | ACL | They then use a generative model to rerank the translation output using additional nonworld level features. | They then use a discriminative model to rerank the translation output using additional nonworld level features. | 2 |
| Lexical entailment | | | | ACL | In contrast to standard MT tasks, we are dealing with a relatively low-resource setting where the sparseness of the target vocabulary is an issue. | Unlike in standard MT tasks, we are dealing with a relatively low-resource setting where the sparseness of the target vocabulary is an issue. | 0 |
| Lexical entailment | | | | ACL | Unlike in standard MT tasks, we are dealing with a relatively low-resource setting where the sparseness of the target vocabulary is an issue. | In contrast to standard MT tasks, we are dealing with a relatively low-resource setting where the sparseness of the target vocabulary is an issue. | 0 |
| | | | World knowledge | ACL | A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding. | Logits are then computed for these actions and particular actions are chosen according to a softmax over these logits during training and decoding. | 0 |
| | | | World knowledge | ACL | Logits are then computed for these actions and particular actions are chosen according to a softmax over these logits during training and decoding. | A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding. | 0 |
| | | | World knowledge | ACL | A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding. | A distribution is then computed over these actions using a maximum-entropy approach and particular actions are chosen accordingly during training and decoding. | 0 |
| | | | World knowledge | ACL | A distribution is then computed over these actions using a maximum-entropy approach and particular actions are chosen accordingly during training and decoding. | A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding. | 1 |
| Lexical entailment | | | | ACL | A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding. | A distribution is then computed over these actions using a softmax function and particular actions are chosen randomly during training and decoding. | 2 |
| Lexical entailment | | | | ACL | A distribution is then computed over these actions using a softmax function and particular actions are chosen randomly during training and decoding. | A distribution is then computed over these actions using a softmax function and particular actions are chosen accordingly during training and decoding. | 2 |
| | | | Common sense | ACL | The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue. | The systems thus produced support the capability to interrupt an interlocutor mid-sentence. | 0 |
| | | | Common sense | ACL | The systems thus produced support the capability to interrupt an interlocutor mid-sentence. | The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue. | 1 |
| Lexical entailment | | | | ACL | The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue. | The systems thus produced are incremental: dialogues are processed sentence-by-sentence, shown previously to be essential in supporting natural, spontaneous dialogue. | 2 |
| Lexical entailment | | | | ACL | The systems thus produced are incremental: dialogues are processed sentence-by-sentence, shown previously to be essential in supporting natural, spontaneous dialogue. | The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue. | 2 |
| | | | World knowledge | ACL | Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one example is enough from which to learn. | Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one-shot learning is sufficient. | 0 |
| | | | World knowledge | ACL | Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one-shot learning is sufficient. | Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one example is enough from which to learn. | 0 |
| | | Upward monotone | | ACL | Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one example is enough from which to learn. | Indeed, it is often stated that for humans to learn how to perform adequately in a domain, any number of examples is enough from which to learn. | 1 |
| | | Upward monotone | | ACL | Indeed, it is often stated that for humans to learn how to perform adequately in a domain, any number of examples is enough from which to learn. | Indeed, it is often stated that for humans to learn how to perform adequately in a domain, one example is enough from which to learn. | 1 |
| Named entities | | | World knowledge | ACL | We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. | We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end natural language generation. | 0 |
| Named entities | | | World knowledge | ACL | We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end natural language generation. | We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. | 0 |
| Named entities | | | World knowledge | ACL | We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. | We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end natural language parsing. | 2 |
| Named entities | | | World knowledge | ACL | We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end natural language parsing. | We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. | 2 |
| Morphological negation | | | Common sense | ACL | To assess the reliability of ratings, we calculated the intra-class correlation coefficient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977). | To assess the unreliability of ratings, we calculated the intra-class correlation coefficient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977). | 0 |
| Morphological negation | | | Common sense | ACL | To assess the unreliability of ratings, we calculated the intra-class correlation coefficient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977). | To assess the reliability of ratings, we calculated the intra-class correlation coefficient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977). | 0 |
| | | | Common sense | ACL | We also show that metric performance is data- and system-specific. | We also show that metric performance varies between datasets and systems. | 0 |
| | | | Common sense | ACL | We also show that metric performance varies between datasets and systems. | We also show that metric performance is data- and system-specific. | 0 |
| | | | Common sense | ACL | We also show that metric performance is data- and system-specific. | We also show that metric performance is constant between datasets and systems. | 2 |
| | | | Common sense | ACL | We also show that metric performance is constant between datasets and systems. | We also show that metric performance is data- and system-specific. | 2 |
| | | | World knowledge | ACL | Our experiments indicate that neural systems are quite good at producing fluent outputs and generally score well on standard word-match metrics, but perform quite poorly at content selection and at capturing long-term structure. | Our experiments indicate that neural systems are quite good at surface-level language modeling, but perform quite poorly at capturing higher level semantics and structure. | 0 |
| | | | World knowledge | ACL | Our experiments indicate that neural systems are quite good at surface-level language modeling, but perform quite poorly at capturing higher level semantics and structure. | Our experiments indicate that neural systems are quite good at producing fluent outputs and generally score well on standard word-match metrics, but perform quite poorly at content selection and at capturing long-term structure. | 0 |
| | | | World knowledge | ACL | Our experiments indicate that neural systems are quite good at producing fluent outputs and generally score well on standard word-match metrics, but perform quite poorly at content selection and at capturing long-term structure. | Our experiments indicate that neural systems are quite good at capturing higher level semantics and structure but perform quite poorly at surface-level language modeling. | 2 |
| | | | World knowledge | ACL | Our experiments indicate that neural systems are quite good at capturing higher level semantics and structure but perform quite poorly at surface-level language modeling. | Our experiments indicate that neural systems are quite good at producing fluent outputs and generally score well on standard word-match metrics, but perform quite poorly at content selection and at capturing long-term structure. | 2 |
| | | | Common sense | ACL | Reconstruction-based techniques can also be applied at the document or sentence-level during training. | Reconstruction-based techniques can operate on multiple scales during training. | 0 |
| | | | Common sense | ACL | Reconstruction-based techniques can operate on multiple scales during training. | Reconstruction-based techniques can also be applied at the document or sentence-level during training. | 0 |
| | | | Common sense | ACL | Reconstruction-based techniques can also be applied at the document or sentence-level during training. | Reconstruction-based techniques can also be applied at the document or sentence-level during test. | 1 |
| | | | Common sense | ACL | Reconstruction-based techniques can also be applied at the document or sentence-level during test. | Reconstruction-based techniques can also be applied at the document or sentence-level during training. | 1 |
| | | Disjunction | | ACL | Reconstruction-based techniques can also be applied at the document or sentence-level during training. | Reconstruction-based techniques can only be applied at the sentence-level during training. | 2 |
| | | Disjunction | | ACL | Reconstruction-based techniques can only be applied at the sentence-level during training. | Reconstruction-based techniques can also be applied at the document or sentence-level during training. | 2 |
| Lexical entailment;Quantifiers | | | | ACL | In practice, our proposed extractive evaluation will pick up on many errors in this passage. | In practice, our proposed extractive evaluation will pick up on few errors in this passage. | 2 |
| Lexical entailment;Quantifiers | | | | ACL | In practice, our proposed extractive evaluation will pick up on few errors in this passage. | In practice, our proposed extractive evaluation will pick up on many errors in this passage. | 2 |
| | | | World knowledge | ACL | Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of passing the Bechdel test. | Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of two named women characters talking about something besides men. | 0 |
| | | | World knowledge | ACL | Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of two named women characters talking about something besides men. | Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of passing the Bechdel test. | 0 |
| | | | World knowledge | ACL | Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of passing the Bechdel test. | Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of men in the narrative talking to each other about women. | 1 |
| | | | World knowledge | ACL | Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of men in the narrative talking to each other about women. | Similarly, the use of more agent-empowering verbs in female narratives decrease the odds of passing the Bechdel test. | 1 |
| | | | Common sense | ACL | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power. | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are more often in positions where they can forbid or permit actions and decisions. | 0 |
| | | | Common sense | ACL | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are more often in positions where they can forbid or permit actions and decisions. | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power. | 0 |
| | | | Common sense | ACL | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power. | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are more often in positions where they are blocked or allowed to do things by others. | 2 |
| | | | Common sense | ACL | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are more often in positions where they are blocked or allowed to do things by others. | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power. | 2 |
| | Intersectivity;Ellipsis/Implicits | | | ACL | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power. | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of low power. | 2 |
| | Intersectivity;Ellipsis/Implicits | | | ACL | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of low power. | Furthermore, male characters use inhibitory language more (inhib), which contains words pertaining to blocking or allowing, suggesting that these characters are in positions of power. | 2 |
| | Restrictivity | Non-monotone | | Reddit | Looking at pictures online of people trying to take photos of mirrors they want to sell is my new thing... | Looking at pictures online of people trying to take photos of mirrors is my new thing... | 1 |
| | Restrictivity | Non-monotone | | Reddit | Looking at pictures online of people trying to take photos of mirrors is my new thing... | Looking at pictures online of people trying to take photos of mirrors they want to sell is my new thing... | 1 |
| Lexical entailment | | | | Artificial | A serene wind rolled across the glade. | A tempestuous wind rolled across the glade. | 2 |
| Lexical entailment | | | | Artificial | A tempestuous wind rolled across the glade. | A serene wind rolled across the glade. | 2 |
| Lexical entailment | | | | Artificial | A serene wind rolled across the glade. | An easterly wind rolled across the glade. | 1 |
| Lexical entailment | | | | Artificial | An easterly wind rolled across the glade. | A serene wind rolled across the glade. | 1 |
| Lexical entailment | | | | Artificial | A serene wind rolled across the glade. | A calm wind rolled across the glade. | 0 |
| Lexical entailment | | | | Artificial | A calm wind rolled across the glade. | A serene wind rolled across the glade. | 0 |
| | Intersectivity | Upward monotone | | Artificial | A serene wind rolled across the glade. | A wind rolled across the glade. | 0 |
| | Intersectivity | Upward monotone | | Artificial | A wind rolled across the glade. | A serene wind rolled across the glade. | 1 |
| | | | World knowledge | Artificial | The reaction was strongly exothermic. | The reaction media got very hot. | 0 |
| | | | World knowledge | Artificial | The reaction media got very hot. | The reaction was strongly exothermic. | 0 |
| | | | World knowledge | Artificial | The reaction was strongly exothermic. | The reaction media got very cold. | 2 |
| | | | World knowledge | Artificial | The reaction media got very cold. | The reaction was strongly exothermic. | 2 |
| | | | World knowledge | Artificial | The reaction was strongly endothermic. | The reaction media got very hot. | 2 |
| | | | World knowledge | Artificial | The reaction media got very hot. | The reaction was strongly endothermic. | 2 |
| | | | World knowledge | Artificial | The reaction was strongly endothermic. | The reaction media got very cold. | 0 |
| | | | World knowledge | Artificial | The reaction media got very cold. | The reaction was strongly endothermic. | 0 |
| | Ellipsis/Implicits | | | Artificial | She didn't think I had already finished it, but I had. | I had already finished it. | 0 |
| | Ellipsis/Implicits | | | Artificial | I had already finished it. | She didn't think I had already finished it, but I had. | 1 |
| Factivity | Ellipsis/Implicits | Negation | | Artificial | She didn't think I had already finished it, but I had. | I hadn't already finished it. | 2 |
| Factivity | Ellipsis/Implicits | Negation | | Artificial | I hadn't already finished it. | She didn't think I had already finished it, but I had. | 2 |
| | Ellipsis/Implicits | Negation | | Artificial | She thought I had already finished it, but I hadn't. | I had already finished it. | 2 |
| | Ellipsis/Implicits | Negation | | Artificial | I had already finished it. | She thought I had already finished it, but I hadn't. | 2 |
| Factivity | Ellipsis/Implicits | | | Artificial | She thought I had already finished it, but I hadn't. | I hadn't already finished it. | 0 |
| Factivity | Ellipsis/Implicits | | | Artificial | I hadn't already finished it. | She thought I had already finished it, but I hadn't. | 1 |
| | Coordination scope | | | Artificial | Temple said that the business was facing difficulties, but didn't make any specific claims. | Temple didn't make any specific claims. | 0 |
| | Coordination scope | | | Artificial | Temple didn't make any specific claims. | Temple said that the business was facing difficulties, but didn't make any specific claims. | 1 |
| | Coordination scope | | | Artificial | Temple said that the business was facing difficulties, but didn't make any specific claims. | The business didn't make any specific claims. | 1 |
| | Coordination scope | | | Artificial | The business didn't make any specific claims. | Temple said that the business was facing difficulties, but didn't make any specific claims. | 1 |
| | Coordination scope | | | Artificial | Temple said that the business was facing difficulties, but didn't have a chance of going into the red. | Temple didn't have a chance of going into the red. | 1 |
| | Coordination scope | | | Artificial | Temple didn't have a chance of going into the red. | Temple said that the business was facing difficulties, but didn't have a chance of going into the red. | 1 |
| | Coordination scope | | | Artificial | Temple said that the business was facing difficulties, but didn't have a chance of going into the red. | Temple said the business didn't have a chance of going into the red. | 0 |
| | Coordination scope | | | Artificial | Temple said the business didn't have a chance of going into the red. | Temple said that the business was facing difficulties, but didn't have a chance of going into the red. | 1 |
| | Relative clauses | | | Artificial | The profits of the businesses that focused on branding were still negative. | The businesses that focused on branding still had negative profits. | 0 |
| | Relative clauses | | | Artificial | The businesses that focused on branding still had negative profits. | The profits of the businesses that focused on branding were still negative. | 0 |
| | Relative clauses | | | Artificial | The profits of the business that was most successful were still negative. | The profits that focused on branding were still negative. | 1 |
| | Relative clauses | | | Artificial | The profits that focused on branding were still negative. | The profits of the business that was most successful were still negative. | 1 |
| | Relative clauses | | | Artificial | The profits of the businesses that were highest this quarter were still negative. | The businesses that were highest this quarter still had negative profits. | 1 |
| | Relative clauses | | | Artificial | The businesses that were highest this quarter still had negative profits. | The profits of the businesses that were highest this quarter were still negative. | 1 |
| | Relative clauses | | | Artificial | The profits of the businesses that were highest this quarter were still negative. | For the businesses, the profits that were highest were still negative. | 0 |
| | Relative clauses | | | Artificial | For the businesses, the profits that were highest were still negative. | The profits of the businesses that were highest this quarter were still negative. | 0 |
| | Datives | | | Artificial | I baked him a cake. | I baked him. | 1 |
| | Datives | | | Artificial | I baked him. | I baked him a cake. | 1 |
| | Datives | | | Artificial | I baked him a cake. | I baked a cake for him. | 0 |
| | Datives | | | Artificial | I baked a cake for him. | I baked him a cake. | 0 |
| | Datives | | | Artificial | I gave him a note. | I gave a note to him. | 0 |
| | Datives | | | Artificial | I gave a note to him. | I gave him a note. | 0 |
| | Core args | | | Artificial | Jake broke the vase. | The vase broke. | 0 |
| | Core args | | | Artificial | The vase broke. | Jake broke the vase. | 1 |
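
Since the rows above are plain pipe-delimited text, they can be loaded back into structured records with a few lines of Python. The sketch below is minimal and rests on two assumptions not stated in the table itself: the table has been saved verbatim to a hypothetical file named `diagnostic_examples.md`, and the integer label follows the common three-way NLI reading (0 = entailment, 1 = neutral, 2 = contradiction).

```python
# Minimal sketch for loading the pipe-delimited table above into Python records.
# Assumptions (not stated in the table itself): the table is saved verbatim to
# "diagnostic_examples.md" (hypothetical filename), and the integer labels follow
# the common NLI convention 0 = entailment, 1 = neutral, 2 = contradiction.

COLUMNS = [
    "lexical_semantics", "predicate_argument_structure", "logic",
    "knowledge", "domain", "premise", "hypothesis", "label",
]
LABEL_NAMES = {0: "entailment", 1: "neutral", 2: "contradiction"}  # assumed mapping


def parse_table(path: str) -> list[dict]:
    """Read markdown-style table rows and return one dict per example."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line.startswith("|"):
                continue  # prose or blank line, not part of the table
            cells = [c.strip() for c in line.split("|")[1:-1]]
            if len(cells) != len(COLUMNS) or not cells[-1].isdigit():
                continue  # header row or ---|--- separator
            example = dict(zip(COLUMNS, cells))
            example["label"] = int(example["label"])
            example["label_name"] = LABEL_NAMES[example["label"]]
            examples.append(example)
    return examples


if __name__ == "__main__":
    examples = parse_table("diagnostic_examples.md")
    print(f"{len(examples)} examples loaded")
    print(examples[0]["premise"], "->", examples[0]["label_name"])
```

From there it is straightforward to group examples by category tag or domain, for instance to compare label distributions across the ACL, Reddit, and Artificial subsets.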