| input | output | metadata | _instance_id |
---|---|---|---|
| stringlengths 286–19k | stringlengths 1–15.8k | dict | stringlengths 15–62 |
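The rows below follow this four-column schema: `input` holds the full prompt (task instructions plus citation context), `output` the single-word gold label, `metadata` a small dict of task properties, and `_instance_id` the record identifier. As a minimal sketch of how such records might be consumed, the snippet below assumes the split has been exported to a JSON Lines file (the filename and the export step are assumptions, not part of this page) and tallies the gold labels.

```python
import json
from collections import Counter

# Assumed export of the split shown on this page: one JSON object per line,
# carrying the four fields from the schema above.
PATH = "acl_arc_intent_classification.train.jsonl"  # hypothetical filename

label_counts = Counter()
with open(PATH, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "input" is the prompt shown in each row; "output" is the gold label.
        label_counts[record["output"].strip()] += 1

print(label_counts.most_common())
```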
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
There have been many studies on parsing techniques (Poller and Becker, 1998; Flickinger et al., 2000), ones on disambiguation models (Chiang, 2000; Kanayama et al., 2000), and ones on programming/grammar-development environ- Strongly equivalent grammars enable the sharing of ideas developed in each formalism. Our concern is, however, not limited to the sharing of grammars and lexicons.
Citation Sentence:
There have been many studies on parsing techniques ( Poller and Becker , 1998 ; Flickinger et al. , 2000 ) , ones on disambiguation models ( Chiang , 2000 ; Kanayama et al. , 2000 ) , and ones on programming/grammar-development environ -
Context after the citation:
1In this paper, we use the term LTAG to refer to FBLTAG, if not confusing. ment (Sarkar and Wintner, 1999; Doran et al., 2000; Makino et al., 1998). These works are restricted to each closed community, and the relation between them is not well discussed. Investigating the relation will be apparently valuable for both communities. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:47 |
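Each record pairs a prompt like the one above with a single-word gold label drawn from a fixed list, so scoring a model on this task reduces to an exact-match check against that label set. The sketch below illustrates this; the function name and the choice to treat labels as case-sensitive are assumptions of the sketch, not something the dataset specifies.

```python
# Scoring sketch for this task format: the prompt restricts the answer to one
# of six labels, so evaluation is exact match over that label set.
LABELS = {"Background", "Extends", "Uses", "Motivation",
          "CompareOrContrast", "FutureWork"}

def is_correct(prediction: str, gold: str) -> bool:
    """Return True if the raw model output matches the gold label exactly."""
    pred = prediction.strip()
    return pred in LABELS and pred == gold

# The first instance above has gold label "Background":
print(is_correct("Background", "Background"))        # True
print(is_correct("It is Background.", "Background"))  # False: extra text fails
```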
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
In order to estimate the parameters of our model, we develop a blocked sampler based on that of Johnson et al. (2007) to sample parse trees for sentences in the raw training corpus according to their posterior probabilities. In this paper, we seek to learn from only raw data and an incomplete dictionary mapping some words to sets of potential supertags. Given our desire to train NLP models in low-supervision scenarios, the possibility of constructing inductive biases out of universal properties of language is enticing: if we can do this well, then it only needs to be done once, and can be applied to any language or domain without adaptation.
Citation Sentence:
In order to estimate the parameters of our model , we develop a blocked sampler based on that of Johnson et al. ( 2007 ) to sample parse trees for sentences in the raw training corpus according to their posterior probabilities .
Context after the citation:
However, due to the very large sets of potential supertags used in a parse, computing inside charts is intractable, so we design a Metropolis-Hastings step that allows us to sample efficiently from the correct posterior. Our experiments show that the incorporation of supertag context parameters into the model improves learning, and that placing combinability-preferring priors on those parameters yields further gains in many scenarios. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:470 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
The extraction procedure utilizes a head percolation table as introduced by Magerman (1995) in combination with a variation of Collins's (1997) approach to the differentiation between complement and adjunct. Chen and Vijay-Shanker (2000) explore a number of related approaches to the extraction of a lexicalized TAG from the Penn-II Treebank with the aim of constructing a statistical model for parsing. As these formalisms are fully lexicalized with an invariant (LTAG and CCG) or limited (HPSG) rule component, the extraction of a lexicon essentially amounts to the creation of a grammar.
Citation Sentence:
The extraction procedure utilizes a head percolation table as introduced by Magerman ( 1995 ) in combination with a variation of Collins 's ( 1997 ) approach to the differentiation between complement and adjunct .
Context after the citation:
This results in the construction of a set of lexically anchored elementary trees which make up the TAG in question. The number of frame types extracted (i.e., an elementary tree without a specific lexical anchor) ranged from 2,366 to 8,996. Xia (1999) also presents a similar method for the extraction of a TAG from the Penn Treebank. The extraction procedure consists of three steps: First, the bracketing of the trees in the Penn Treebank is corrected and extended based on the approaches of Magerman (1994) and Collins (1997). | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:471 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Many other such cases are described in Danlos's book (Danlos 1987). This means that the microplanner will not be able to make optimal pronominalization decisions in cases where le or la are unambiguous, but l' is not, since it does not know word order and hence whether the pronoun will be abbreviated. But in a pipelined NLG system, pronominalization decisions are typically made earlier than word-ordering decisions; for example in the three-stage pipelined architecture presented by Reiter and Dale (2000), pronominalization decisions are made in the second stage (microplanning), but word ordering is chosen during the third stage (realization).
Citation Sentence:
Many other such cases are described in Danlos 's book ( Danlos 1987 ) .
Context after the citation:
The common theme behind many of these examples is that pipelines have difficulties satisfying linguistic constraints (such as unambiguous reference) or performing linguistic optimizations (such as using pronouns instead of longer referring expressions whenever possible) in cases where the constraints or optimizations depend on decisions made in multiple modules. This is largely due to the fact that pipelined systems cannot perform general search over a decision space that includes decisions made in more than one module. Despite these arguments, most applied NLG systems use a pipelined architecture; indeed, a pipeline was used in every one of the systems surveyed by Reiter (1994) and Paiva (1998). This may be because pipelines have many engineering advantages, and in practice the sort of problems pointed out by Danlos and other pipeline critics do not seem to be a major problem in current applied NLG systems (Mittal et al. 1998). | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:472 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
⢠Before indexing the text, we process it with Textract (Byrd and Ravin, 1998; Wacholder et al., 1997), which performs lemmatization, and discovers proper names and technical terms. For example, the pattern "What is the population" gets replaced by "NUMBER$ population". Some templates do not cause complete replacement of the matched string.
Citation Sentence:
⢠Before indexing the text , we process it with Textract ( Byrd and Ravin , 1998 ; Wacholder et al. , 1997 ) , which performs lemmatization , and discovers proper names and technical terms .
Context after the citation:
We added a new module (Resporator) which annotates text segments with QA-Tokens using pattern matching. Thus the text "for 5 centuries" matches the DURATION$ pattern "for :CARDINAL _timeperiod", where :CARDINAL is the label for cardinal numbers, and _timeperiod marks a time expression. • GuruQA scores text passages instead of documents. We use a simple document- and collection-independent weighting scheme: QA-Tokens get a weight of 400, proper nouns get 200 and any other word 100 (stop words are removed in query processing after the pattern template | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:473 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Therefore, we repeated the experiments with POS tags predicted by the MADA toolkit (Habash and Rambow 2005; Habash, Rambow, and Roth 2012)15 (see Table 2, 14 Some parsers predict POS tags internally, instead of receiving them as input, but this is not the case in this article. Put differently, we are interested in the tradeoff between relevance and accuracy. But in practice, POS tags are annotated by automatic taggers, so parsers get predicted POS tags as input, as opposed to gold (human-annotated) tags.14 The more informative the tag set, the less accurate the tag prediction might be, so the effect on overall parsing quality is unclear.
Citation Sentence:
Therefore , we repeated the experiments with POS tags predicted by the MADA toolkit ( Habash and Rambow 2005 ; Habash , Rambow , and Roth 2012 ) 15 ( see Table 2 , 14 Some parsers predict POS tags internally , instead of receiving them as input , but this is not the case in this article .
Context after the citation:
15 We use MADA v3.1 in all of our experiments. We note that MADA v3.1 was tuned on the same development set that we use for making our parsing model choices; ideally, we would have chosen a different development set for our work on parsing, but we thought it would be best to use MADA as a black box component (for past and future comparability), and did not have sufficient data to carve out from a second development set (while retaining a test set). We do not take this as a major concern for our results. In fact, although MADA was tuned to maximize its core POS accuracy (the untokenized version of CORE44), CORE44 did not yield best parsing quality on MADA-predicted input (see Table 2). | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:474 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Experiments on Chinese SRL (Xue and Palmer 2005, Xue 2008) reassured these findings. For semantic analysis, developing features that capture the right kind of information is crucial. They found out that different features suited for different sub tasks of SRL, i.e. semantic role identification and classification.
Citation Sentence:
Experiments on Chinese SRL ( Xue and Palmer 2005 , Xue 2008 ) reassured these findings .
Context after the citation:
In this paper, we mainly focus on the semantic role classification (SRC) process. With the findings about the linguistic discrepancy of different semantic role groups, we try to build a 2-step semantic role classifier with hierarchical feature selection strategy. That means, for different sub tasks, different models will be trained with different features. The purpose of this strategy is to capture the right kind of information of different semantic role groups. | Motivation | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:475 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
An example of this is the estimation of maximum entropy models, from simple iterative estimation algorithms used by Ratnaparkhi (1998) that converge very slowly, to complex techniques from the optimisation literature that converge much more rapidly (Malouf, 2002). However, it will be increasingly important as techniques become more complex and corpus sizes grow. Efficiency has not been a focus for NLP research in general.
Citation Sentence:
An example of this is the estimation of maximum entropy models , from simple iterative estimation algorithms used by Ratnaparkhi ( 1998 ) that converge very slowly , to complex techniques from the optimisation literature that converge much more rapidly ( Malouf , 2002 ) .
Context after the citation:
Other attempts to address efficiency include the fast Transformation Based Learning (TBL) Toolkit (Ngai and Florian, 2001) which dramatically speeds up training TBL systems, and the translation of TBL rules into finite state machines for very fast tagging (Roche and Schabes, 1997). The TNT POS tagger (Brants, 2000) has also been designed to train and run very quickly, tagging between 30,000 and 60,000 words per second. The Weka package (Witten and Frank, 1999) provides a common framework for several existing machine learning methods including decision trees and support vector machines. This library has been very popular because it allows researchers to experiment with different methods without having to modify code or reformat data. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:476 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
According to Budanitsky and Hirst (2006), there are three prevalent approaches for evaluating SR measures: mathematical analysis, applicationspecific evaluation and comparison with human judgments. The knowledge sources used for computing relatedness can be as different as dictionaries, ontologies or large corpora. Various approaches for computing semantic relatedness of words or concepts have been proposed, e.g. dictionary-based (Lesk, 1986), ontology-based (Wu and Palmer, 1994; Leacock and Chodorow, 1998), information-based (Resnik, 1995; Jiang and Conrath, 1997) or distributional (Weeds and Weir, 2005).
Citation Sentence:
According to Budanitsky and Hirst ( 2006 ) , there are three prevalent approaches for evaluating SR measures : mathematical analysis , applicationspecific evaluation and comparison with human judgments .
Context after the citation:
Mathematical analysis can assess a measure with respect to some formal properties, e.g. whether a measure is a metric (Lin, 1998).4 However, mathematical analysis cannot tell us whether a measure closely resembles human judgments or whether it performs best when used in a certain application. The latter question is tackled by applicationspecific evaluation, where a measure is tested within the framework of a certain application, e.g. word sense disambiguation (Patwardhan et al., 2003) or malapropism detection (Budanitsky and Hirst, 2006). Lebart and Rajman (2000) argue for application-specific evaluation of similarity measures, because measures are always used for some task. But they also note that evaluating a measure as part of a usually complex application only indirectly assesses its quality. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:477 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
The choice of learning algorithm for each classifier is motivated by earlier findings showing that discriminative classifiers outperform other machine-learning methods on error correction tasks (Rozovskaya and Roth, 2011). The other system components use the preprocessing tools only as part of candidate generation (e.g., to identify all nouns in the data for the noun classifier). ger2 and shallow parser3 (Punyakanok and Roth, 2001).
Citation Sentence:
The choice of learning algorithm for each classifier is motivated by earlier findings showing that discriminative classifiers outperform other machine-learning methods on error correction tasks ( Rozovskaya and Roth , 2011 ) .
Context after the citation:
Thus, the classifiers trained on the learner data make use of a discriminative model. Because the Google corpus does not contain complete sentences but only n-gram counts of length up to five, training a discriminative model is not desirable, and we thus use NB (details in Rozovskaya and Roth (2011)). The article classifier is a discriminative model that draws on the state-of-the-art approach described in Rozovskaya et al. (2012). The model makes use of the Averaged Perceptron (AP) algorithm (Freund and Schapire, 1996) and is trained on the training data of the shared task with rich features. | Motivation | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:478 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
A good study comparing document categorization algorithms can be found in (Yang and Liu, 1999). For example, (Fang et al., 2001) discusses the evaluation of two different text categorization strategies with several variations of their feature spaces. The bulk of the text categorization work has been devoted to cope with automatic categorization of English and Latin character documents.
Citation Sentence:
A good study comparing document categorization algorithms can be found in ( Yang and Liu , 1999 ) .
Context after the citation:
More recently, (Sebastiani, 2002) has performed a good survey of document categorization; recent works can also be found in (Joachims, 2002), (Crammer and Singer, 2003), and (Lewis et al., 2004). Concerning Arabic, one automatic categorizer has been reported to have been put under operational use to classify Arabic documents; it is referred to as "Sakhr's categorizer" (Sakhr, 2004). Unfortunately, there is no technical documentation or specification concerning this Arabic categorizer. Sakhr's marketing literature claims that this categorizer is based on Arabic morphology and some research that has been carried out on natural language processing. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:479 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
For example, Gay et al. (2005) experimented with abstracts and full article texts in the task of automatically generating index term recommendations and discovered that using full article texts yields at most a 7.4% improvement in F-score. Although there is a trend towards analysis of full article texts, we believe that abstracts still provide a tremendous amount of information, and much value can still be extracted from them. The ability to explicitly identify these sections in unstructured text could play an important role in applications such as document summarization (Teufel and Moens, 2000), information retrieval (Tbahriti et al., 2005), information extraction (Mizuta et al., 2005), and question answering.
Citation Sentence:
For example , Gay et al. ( 2005 ) experimented with abstracts and full article texts in the task of automatically generating index term recommendations and discovered that using full article texts yields at most a 7.4 % improvement in F-score .
Context after the citation:
Demner-Fushman et al. (2005) found a correlation between the quality and strength of clinical conclusions in the full article texts and abstracts. This paper presents experiments with generative content models for analyzing the discourse structure of medical abstracts, which has been confirmed to follow the four-section pattern discussed above (Salanger-Meyer, 1990). For a variety of reasons, medicine is an interesting domain of research. The need for information systems to support physicians at the point of care has been well studied (Covell et al., 1985; Gorman et al., 1994; Ely et al., 2005). | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:48 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
For HMMs (footnote 11), Ti is the familiar trellis, and we would like this computation of ti to reduce to the forward-backward algorithm (Baum, 1972). • In many cases of interest, Ti is an acyclic graph.20 Then Tarjan's method computes w0j for each j in topologically sorted order, thereby finding ti in a linear number of ⊕ and ⊗ operations. Efficient hardware implementation is also possible via chip-level parallelism (Rote, 1985).
Citation Sentence:
For HMMs ( footnote 11 ) , Ti is the familiar trellis , and we would like this computation of ti to reduce to the forward-backward algorithm ( Baum , 1972 ) .
Context after the citation:
But notice that it has no backward pass. In place of pushing cumulative probabilities backward to the arcs, it pushes cumulative arcs (more generally, values in V ) forward to the probabilities. This is slower because our ⊕ and ⊗ are vector operations, and the vectors rapidly lose sparsity as they are added together. We therefore reintroduce a backward pass that lets us avoid ⊕ and ⊗ when computing ti (so they are needed only to construct Ti). | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:480 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
We follow Stymne and Cancedda (2011), for compound merging. This approach resulted in too many compounds. Popovic et al. (2006) used a simple, list-based merging approach, merging all consecutive words included in a merging list.
Citation Sentence:
We follow Stymne and Cancedda ( 2011 ) , for compound merging .
Context after the citation:
We trained a CRF using (nearly all) of the features they used and found their approach to be effective (when combined with inflection and portmanteau merging) on one of our two test sets. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:481 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Semantic filters can also be used to prevent multiple versions of the same case frame (Fillmore 1968) showing up as complements. In this case, the verb has no information concerning its subject, and so it identifies it as an unbound pronoun. Finally, in the case of passive voice, the CURRENT-FOCUS slot is empty at the time the verb is proposed, because the CURRENT-FOCUS which was the surface-form subject has been moved to the float-object position.
Citation Sentence:
Semantic filters can also be used to prevent multiple versions of the same case frame ( Fillmore 1968 ) showing up as complements .
Context after the citation:
For instance, the set of complements [from-place], [to-place], and [at-time] are freely ordered following a movement verb such as "leave." Thus a flight can "leave for Chicago from Boston at nine," or, equivalently, "leave at nine for Chicago from Boston." If these complements are each allowed to follow the other, then in TINA an infinite sequence of [from-place's, [to-place]s and [at-time]s is possible. This is of course unacceptable, but it is straightforward to have each node, as it occurs, or in a semantic bit specifying its case frame, and, in turn, fail if that bit has already been set. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:482 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The terms have been identified as the most specific to our corpus by a program developed by Drouin (2003) and called TermoStat. CompuTerm 2004 3rd International Workshop on Computational Terminology 43 To construct this test set, we have focused our attention on ten domain-specific terms: commande (command), configuration, fichier (file), Internet, logiciel (software), option, ordinateur (computer), serveur (server), systeme (system), utilisateur (user). These two rates were evaluated using a test sample containing all this information.
Citation Sentence:
The terms have been identified as the most specific to our corpus by a program developed by Drouin ( 2003 ) and called TermoStat .
Context after the citation:
The ten most specific nouns have been produced by comparing our corpus of computing to the French corpus Le Monde, composed of newspaper articles (Lemay et al., 2004). Note that to prevent any bias in the results, none of these terms were used as positive examples during the pattern inference step. (They were removed from the example set.) For each of these 10 nouns, a manual identification of valid and invalid pairs was carried out. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:483 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
For statistical significance, we use McNemar's test on non-gold LAS, as implemented by Nilsson and Nivre (2008). Unlabeled attachment accuracy score (UAS) and label accuracy (dependency relation regardless of parent, LS) are also given. All results are reported mainly in terms of labeled attachment accuracy score (the parent word and the type of dependency relation to it, abbreviated as LAS), which is also used for greedy (hill-climbing) decisions for feature combination.
Citation Sentence:
For statistical significance , we use McNemar 's test on non-gold LAS , as implemented by Nilsson and Nivre ( 2008 ) .
Context after the citation:
We denote p < 0.05 and p < 0.01 with + and ++, respectively. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:484 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
The availability of toolkits for this weighted case (Mohri et al., 1998; van Noord and Gerdemann, 2001) promises to unify much of statistical NLP. An artificial example will appear in §2. Such models can be efficiently restricted, manipulated or combined using rational operations as before.
Citation Sentence:
The availability of toolkits for this weighted case ( Mohri et al. , 1998 ; van Noord and Gerdemann , 2001 ) promises to unify much of statistical NLP .
Context after the citation:
Such tools make it easy to run most current approaches to statistical markup, chunking, normalization, segmentation, alignment, and noisy-channel decoding,' including classic models for speech recognition (Pereira and Riley, 1997) and machine translation (Knight and Al-Onaizan, 1998). Moreover, once the models are expressed in the finitestate framework, it is easy to use operators to tweak them, to apply them to speech lattices or other sets, and to combine them with linguistic resources. Unfortunately, there is a stumbling block: Where do the weights come from? After all, statistical models require supervised or unsupervised training. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:485 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
tionally reconstructed by Alshawi and Crouch (1992) and Crouch and Putman (1994), the context-independent meaning of a sentence is given by one or more QLFs that are built directly from syntactic and semantic rules. In the CLE-QLF approach, as ra- The starting point for the approach followed here was a dissatisfaction with certain aspects of the theory of quasi-logical form as described in Alshawi (1990, 1992), and implemented in SRI's Core Language Engine (CLE).
Citation Sentence:
tionally reconstructed by Alshawi and Crouch ( 1992 ) and Crouch and Putman ( 1994 ) , the context-independent meaning of a sentence is given by one or more QLFs that are built directly from syntactic and semantic rules .
Context after the citation:
Just as here, these QLFs represent the basic predicate argument structure of the sentence, and contain constructs which represent those aspects of the meaning of the sentence that are dependent on context. The effects of contextual resolution are uniformly represented via the instantiation of metavariables. This instantiation is brought about by the operation of resolution rules, which are essentially user-defined Prolog predicates finding appropriate instantiations for metavariables from the current context. Contextual resolution is therefore a process of adding information to an underspecified meaning representation until it is sufficiently specified for the task at hand. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:486 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
We use a standard split of 268 training documents, 68 development documents, and 106 testing documents (Culotta et al., 2007; Bengtson and Roth, 2008). Datasets The ACE-2004 dataset contains 443 documents.
Citation Sentence:
We use a standard split of 268 training documents , 68 development documents , and 106 testing documents ( Culotta et al. , 2007 ; Bengtson and Roth , 2008 ) .
Context after the citation:
The OntoNotes-5.0 dataset, which is released for the CoNLL-2012 Shared Task (Pradhan et al., 2012), contains 3,145 annotated documents. These documents come from a wide range of sources which include newswire, bible, transcripts, magazines, and web blogs. We report results on the test documents for both datasets. The ACE-2004 dataset is annotated with both mention and mention heads, while the OntoNotes5.0 dataset only has mention annotations. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:487 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Bojar and Kos (2010) improved on this by marking prepositions with the case they mark (one of the most important markups in our system). Fraser (2009) tried to solve the inflection prediction problem by simply building an SMT system for translating from stems to inflected forms. We use more complex context features.
Citation Sentence:
Bojar and Kos ( 2010 ) improved on this by marking prepositions with the case they mark ( one of the most important markups in our system ) .
Context after the citation:
Both efforts were ineffective on large data sets. Williams and Koehn (2011) used unification in an SMT system to model some of the agreement phenomena that we model. Our CRF framework allows us to use more complex context features. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:488 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
But typical OT grammars offer much richer finite-state models of left context (Eisner 1997a) than provided by the traditional HMM finite-state topologies. Just like OT grammars, HMM Viterbi decoders are functions that pick the optimal output from E*, based on criteria of well-formedness (transition probabilities) and faithfulness to the input (emission probabilities). For example, consider the relevance to hidden Markov models (HMMs), another restricted class of Gibbs distributions used in speech recognition or part-of-speech tagging.
Citation Sentence:
But typical OT grammars offer much richer finite-state models of left context ( Eisner 1997a ) than provided by the traditional HMM finite-state topologies .
Context after the citation:
Now, among methods that use a Gibbs distribution to choose among linguistic forms, OT generation is special in that the distribution ranks the features strictly, rather than weighting them in a gentler way that allows tradeoffs. When is this appropriate? It seems to me that there are three possible uses. First, there are categorical phenomena for which strict feature ranking may genuinely suffice. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:489 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The ten most specific nouns have been produced by comparing our corpus of computing to the French corpus Le Monde, composed of newspaper articles (Lemay et al., 2004). The terms have been identified as the most specific to our corpus by a program developed by Drouin (2003) and called TermoStat. CompuTerm 2004 3rd International Workshop on Computational Terminology 43 To construct this test set, we have focused our attention on ten domain-specific terms: commande (command), configuration, fichier (file), Internet, logiciel (software), option, ordinateur (computer), serveur (server), systeme (system), utilisateur (user).
Citation Sentence:
The ten most specific nouns have been produced by comparing our corpus of computing to the French corpus Le Monde , composed of newspaper articles ( Lemay et al. , 2004 ) .
Context after the citation:
Note that to prevent any bias in the results, none of these terms were used as positive examples during the pattern inference step. (They were removed from the example set.) For each of these 10 nouns, a manual identification of valid and invalid pairs was carried out. Linguists were asked to analyze the sentences and decide whether the highlighted pairs were valid N-V pairs. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:49 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Other tools have been designed around particular techniques, such as finite state machines (Karttunen et al., 1997; Mohri et al., 1998). This gives a greater flexibility but the tradeoff is that these tools can run very slowly. These tools also store their configuration state, e.g. the transduction rules used in LT CHUNK, in XML configuration files.
Citation Sentence:
Other tools have been designed around particular techniques , such as finite state machines ( Karttunen et al. , 1997 ; Mohri et al. , 1998 ) .
Context after the citation:
However, the source code for these tools is not freely available, so they cannot be extended. Efficiency has not been a focus for NLP research in general. However, it will be increasingly important as techniques become more complex and corpus sizes grow. An example of this is the estimation of maximum entropy models, from simple iterative estimation algorithms used by Ratnaparkhi (1998) that converge very slowly, to complex techniques from the optimisation literature that converge much more rapidly (Malouf, 2002). | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:490 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
This contrasts with the findings described in Rapp & Zock (2010) where significant improvements could be achieved by increasing the number of source languages. As can be seen, with an average score of 51.8 the improvement over the English only variant (50.6) is minimal. 3) Little improvement for several source words The right column in Table 1 shows the scores if (using the product-of-ranks algorithm) four source languages are taken into account in parallel.
Citation Sentence:
This contrasts with the findings described in Rapp & Zock ( 2010 ) where significant improvements could be achieved by increasing the number of source languages .
Context after the citation:
So this casts some doubt on these. However, as English was not considered as a source language there, the performance levels were mostly between 10 and 20, leaving much room for improvement. This is not the case here, where we try to improve on a score of around 50 for English. Remember that this is a somewhat conservative score as we count correct but alternative translations, as errors. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:491 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
The function selects the Value that removes most distractors, but in case of a tie, the least specific contestant is chosen, as long as it is not less specific than the basic-level Value (i.e., the most commonly occurring and psychologically most fundamental level, Rosch 1978). FindBestValue selects the "best value" from among the Values of a given Attribute, assuming that these are linearly ordered in terms of specificity. The expansion of L and the contraction of C continue until C = S:
Citation Sentence:
The function selects the Value that removes most distractors , but in case of a tie , the least specific contestant is chosen , as long as it is not less specific than the basic-level Value ( i.e. , the most commonly occurring and psychologically most fundamental level , Rosch 1978 ) .
Context after the citation:
IAPlur can refer to individuals as well as sets, since reference to a target individual r can be modeled as reference to the singleton set {r}. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:492 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Budanitsky and Hirst (2006) pointed out that distribution plots of judgments for the word pairs used by Rubenstein and Goodenough display an empty horizontal band that could be used to separate related and unrelated pairs. However, even with the present setup, automatic extraction of concept pairs performs remarkably well and can be used to quickly create balanced test datasets. ing could be used.
Citation Sentence:
Budanitsky and Hirst ( 2006 ) pointed out that distribution plots of judgments for the word pairs used by Rubenstein and Goodenough display an empty horizontal band that could be used to separate related and unrelated pairs .
Context after the citation:
This empty band is not observed here. However, Figure 4 shows the distribution of averaged judgments with the highest agreement between annotators (standard deviation < 0.8). The plot clearly shows an empty horizontal band with no judgments. The connection between averaged judgments and standard deviation is plotted in Figure 5. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:493 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
In fact, Reiter has even argued in favor of this approach, claiming that the interactions are sufficiently minor to be ignored (or at least handled on an ad hoc basis) (Reiter 1994). This approach is the one taken (implicitly or explicitly) in the majority of generators. Whatever problems result will be handled as best they can, on a case-by-case basis.
Citation Sentence:
In fact , Reiter has even argued in favor of this approach , claiming that the interactions are sufficiently minor to be ignored ( or at least handled on an ad hoc basis ) ( Reiter 1994 ) .
Context after the citation:
While this certainly has appeal as a design methodology, it seems reckless to assume that problems will never appear. Certainly an approach to generation that does handle these interactions would be an improvement, as long as it didn't require abandoning modularity. There have in fact been attempts to develop modified modular designs that allow generators to handle interactions between the components. These include devices such as interleaving the components (McDonald 1983; Appelt 1983), backtracking on failure (Appelt 1985; Nogier 1989), allowing the linguistic component to interrogate the planner (Mann 1983; Sondheimer and Nebel 1986), and Hovy's notion of restrictive (i.e., bottom-up) planning (Hovy 1988a, 1988c). | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:494 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
The method is called targeted self-training as it is similar in vein to self-training (McClosky et al., 2006), with the exception that the new parse data is targeted to produce accurate word reorderings. The parse with the best reordering score is then fixed and added back to the training set and a new parser is trained on resulting data. In that work, a parser is used to first parse a set of manually reordered sentences to produce k-best lists.
Citation Sentence:
The method is called targeted self-training as it is similar in vein to self-training ( McClosky et al. , 2006 ) , with the exception that the new parse data is targeted to produce accurate word reorderings .
Context after the citation:
Our method differs as it does not statically fix a new parse, but dynamically updates the parameters and parse selection by incorporating the additional loss in the inner loop of online learning. This allows us to give guarantees of convergence. Furthermore, we also evaluate the method on alternate extrinsic loss functions. Liang et al. (2006) presented a perceptron-based algorithm for learning the phrase-translation parameters in a statistical machine translation system. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:495 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
In our previous work (Zhang and Chai, 2009), we started an initial investigation on conversation entailment. Here we address a different angle regarding conversation scripts, namely conversation entailment. Recent studies have also developed approaches to summarize conversations (Murray and Carenini, 2008) and to model conversation structures (dialogue acts) from online Twitter conversations (Ritter et al., 2010).
Citation Sentence:
In our previous work ( Zhang and Chai , 2009 ) , we started an initial investigation on conversation entailment .
Context after the citation:
We have collected a dataset of 875 instances. Each instance consists of a conversation segment and a hypothesis (as described in Section 1). The hypotheses are statements about conversation participants and are further categorized into four types: about their profile information, their beliefs and opinions, their desires, and their communicative intentions. We developed an approach that is motivated by previous work on textual entailment. | Extends | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:496 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Bashir (1993) tried to construct a semantic analysis based on "prepared" and "unprepared mind". Butt (1993) argues CV formations in Hindi and Urdu are either morphological or syntactical and their formation take place at the argument structure. Hook (1981) considers the second verb V2 as an aspectual complex comparable to the auxiliaries.
Citation Sentence:
Bashir ( 1993 ) tried to construct a semantic analysis based on `` prepared '' and `` unprepared mind '' .
Context after the citation:
Similar findings have been proposed by Pandharipande (1993) that points out V1 and V2 are paired on the basis of their semantic compatibility, which is subject to syntactic constraints. Paul (2004) tried to represent Bangla CVs in terms of HPSG formalism. She proposes that the selection of a V2 by a V1 is determined at the semantic level because the two verbs will unify if and only if they are semantically compatible. Since none of the linguistic formalism could satisfactorily explain the unique phenomena of CV formation, we here for the first time drew our attention towards psycholinguistic and neurolinguistic studies to model the processing of verb-verb combinations in the ML and compare these responses with that of the existing models. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:497 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
It is also possible to focus on non-compositional compounds, a key point in bilingual applications (Su et al., 1994; Melamed, 1997; Lin, 99). One way to increase the precision of the mapping process is to impose some linguistic constraints on the sequences such as simple noun-phrase contraints (Gaussier, 1995; Kupiec, 1993; hua Chen and Chen, 94; Fung, 1995; Evans and Zhai, 1996).
Citation Sentence:
It is also possible to focus on non-compositional compounds , a key point in bilingual applications ( Su et al. , 1994 ; Melamed , 1997 ; Lin , 99 ) .
Context after the citation:
Another interesting approach is to restrict sequences to those that do not cross constituent boundary patterns (Wu, 1995; Furuse and Iida, 96). In this study, we filtered for potential sequences that are likely to be noun phrases, using simple regular expressions over the associated part-of-speech tags. An excerpt of the association probabilities of a unit model trained considering only the NP-sequences is given in table 3. Applying this filter (referred to as .7. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:498 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
This is mainly due to the fact that Arabic is a non-concatenative language (Al-Shalabi and Evens, 1998), and that the stem/infix obtained by suppression of infix and prefix add-ons is not the same for words derived from the same origin called the root. In Arabic, however, the use of stems will not yield satisfactory categorization. Then roots are extracted for words in the document.
Citation Sentence:
This is mainly due to the fact that Arabic is a non-concatenative language ( Al-Shalabi and Evens , 1998 ) , and that the stem/infix obtained by suppression of infix and prefix add-ons is not the same for words derived from the same origin called the root .
Context after the citation:
The infix form (or stem) needs further to be processed in order to obtain the root. This processing is not straightforward: it necessitates expert knowledge in Arabic language word morphology (Al-Shalabi and Evens, 1998). As an example, two close roots (i.e., roots made of the same letters), but semantically different, can yield the same infix form thus creating ambiguity. The root extraction process is concerned with the transformation of all Arabic word derivatives to their single common root or canonical form. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:499 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
conclusion
Context before the citation:
This is roughly an 11% relative reduction in error rate over Charniak (2000) and Bods PCFG-reduction reported in Table 1. The highest accuracy is obtained by SL-DOP at 12 ≤ n ≤ 14: an LP of 90.8% and an LR of 90.7%. But while the accuracy of SL-DOP decreases after n=14 and converges to Simplicity DOP, the accuracy of LS-DOP continues to increase and converges to Likelihood-DOP.
Citation Sentence:
This is roughly an 11 % relative reduction in error rate over Charniak ( 2000 ) and Bods PCFG-reduction reported in Table 1 .
Context after the citation:
Compared to the reranking technique in Collins (2000), who obtained an LP of 89.9% and an LR of 89.6%, our results show a 9% relative error rate reduction. While SL-DOP and LS-DOP have been compared before in Bod (2002), especially in the context of musical parsing, this paper presents the The DOP approach is based on two distinctive features: (1) the use of corpus fragments rather than grammar rules, and (2) the use of arbitrarily large fragments rather than restricted ones. While the first feature has been generally adopted in statistical NLP, the second feature has for a long time been a serious bottleneck, as it results in exponential processing time when the most probable parse tree is computed. This paper showed that a PCFG-reduction of DOP in combination with a new notion of the best parse tree results in fast processing times and very competitive accuracy on the Wall Street Journal treebank. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:5 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
Secondly, we need to investigate techniques for identifying identical documents, virtually identical documents and highly repetitive documents, such as those pioneered by Fletcher (2004b) and shingling techniques described by Chakrabarti (2002). From a computational linguistic view, the framework will also need to take into account the granularity of the unit (for example, POS tagging requires sentence-units, but anaphoric annotation needs paragraphs or larger). • Site based corpus annotation in which the user can specify a web site to annotate • Domain based corpus annotation in which the user specifies a content domain (with the use of keywords) to annotate • Crawler based corpus annotation more general web based corpus annotation in which crawlers are used to locate web pages
Citation Sentence:
Secondly , we need to investigate techniques for identifying identical documents , virtually identical documents and highly repetitive documents , such as those pioneered by Fletcher ( 2004b ) and shingling techniques described by Chakrabarti ( 2002 ) .
Context after the citation:
The second stage of our work will involve implementing the framework within a P2P environment. We have already developed a prototype of an object-oriented application environment to support P2P system development using JXTA (Sun's P2P API). We have designed this environment so that specific application functionality can be captured within plug-ins that can then integrate with the environment and utilise its functionality. | FutureWork | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:50 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
ear regression adapted for classification (Ting and Witten 1999), which can be described by the following equation: 4 http://mallet.cs.umass.edu/ The second involved a more principled method using confidence values generated by the base classifiers and least squares lin-
Citation Sentence:
ear regression adapted for classification ( Ting and Witten 1999 ) , which can be described by the following equation :
Context after the citation:
Pk is the probability that a sentence specifies an outcome, as determined by classifier k (for classifiers that do not return actual probabilities, we normalized the scores and treated them as such). To predict the class of a sentence, the probabilities generated by n classifiers are combined using the coefficients (α0, ..., αn). These values are determined in the training stage as follows: Probabilities predicted by base classifiers for each sentence are represented in an N x M matrix A, where M is the number of sentences in the training set, and N is the number of classifiers. The gold standard class assignments for each sentence is stored in a vector b, and weights are found by computing the vector α that minimizes ||Aα − b||.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:500 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
For example, the forward-backward algorithm (Baum, 1972) trains only Hidden Markov Models, while (Ristad and Yianilos, 1996) trains only stochastic edit distance. Not only do these methods require additional programming outside the toolkit, but they are limited to particular kinds of models and training regimens. Currently, finite-state practitioners derive weights using exogenous training methods, then patch them onto transducer arcs.
Citation Sentence:
For example , the forward-backward algorithm ( Baum , 1972 ) trains only Hidden Markov Models , while ( Ristad and Yianilos , 1996 ) trains only stochastic edit distance .
Context after the citation:
In short, current finite-state toolkits include no training algorithms, because none exist for the large space of statistical models that the toolkits can in principle describe and run. 'Given output, find input to maximize P(input, output). This paper aims to provide a remedy through a new paradigm, which we call parameterized finitestate machines. It lays out a fully general approach for training the weights of weighted rational relations. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:501 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Using the tree-cut technique described above, our previous work (Tomuro, 2000) extracted systematic polysemy from WordNet.
Citation Sentence:
Using the tree-cut technique described above , our previous work ( Tomuro , 2000 ) extracted systematic polysemy from WordNet .
Context after the citation:
In this section, we give a summary of this method, and describe the cluster pairs obtained by the method. 4For justification and detailed explanation of these formulas, see (Li and Abe, 1998). 5 In our previous work, we used entropy instead of NILE. That is because the lexicon represents true population, not samples; thus there is no additional data to estimate. | Extends | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:502 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
(Och and Ney, 2002; Blunsom et al., 2008) used maximum likelihood estimation to learn weights for MT. (Och, 2003; Moore and Quirk, 2008; Zhao and Chen, 2009; Galley and Quirk, 2011) employed an evaluation metric as a loss function and directly optimized it. Several works have proposed discriminative techniques to train log-linear model for SMT.
Citation Sentence:
( Och and Ney , 2002 ; Blunsom et al. , 2008 ) used maximum likelihood estimation to learn weights for MT. ( Och , 2003 ; Moore and Quirk , 2008 ; Zhao and Chen , 2009 ; Galley and Quirk , 2011 ) employed an evaluation metric as a loss function and directly optimized it .
Context after the citation:
(Watanabe et al., 2007; Chiang et al., 2008; Hopkins and May, 2011) proposed other optimization objectives by introducing a margin-based and ranking-based indirect loss functions. All the methods mentioned above train a single weight for the whole development set, whereas our local training method learns a weight for each sentence. Further, our translation framework integrates the training and testing into one unit, instead of treating them separately. One of the advantages is that it can adapt the weights for each of the test sentences. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:503 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
5 Significant bigrams are obtained using the n-gram statistics package NSP (Banerjee and Pedersen 2003), which offers statistical tests to decide whether to accept or reject the null hypothesis regarding a bigram (that it is not a collocation). Nonetheless, in the future, it may be worth investigating a TF.IDF-based representation. Thus, their low TF.IDF score may have an adverse influence on clustering performance.
Citation Sentence:
5 Significant bigrams are obtained using the n-gram statistics package NSP ( Banerjee and Pedersen 2003 ) , which offers statistical tests to decide whether to accept or reject the null hypothesis regarding a bigram ( that it is not a collocation ) .
Context after the citation:
As discussed in Section 2, there are situations that cannot be addressed by a document-level approach, because requests only predict or match portions of responses. An alternative approach is to look for promising sentences from one or more previous responses, and collate them into a new response. This task can be cast as extractive multi-document summarization. Unlike a document reuse approach, sentence-level approaches need to consider issues of discourse coherence in order to ensure that the extracted combination of sentences is coherent or at least understandable. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:504 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Liu et al. (2005), Meral et al. (2007), Murphy (2001), Murphy and Vogel (2007) and Topkara et al. (2006a) all belong to the syntactic transformation category. In other words, instead of performing lexical substitution directly to the text, the secret message is embedded into syntactic parse trees of the sentences. Later, Atallah et al. (2001b) embedded information in the tree structure of the text by adjusting the structural properties of intermediate representations of sentences.
Citation Sentence:
Liu et al. ( 2005 ) , Meral et al. ( 2007 ) , Murphy ( 2001 ) , Murphy and Vogel ( 2007 ) and Topkara et al. ( 2006a ) all belong to the syntactic transformation category .
Context after the citation:
After embedding the secret message, modified deep structure forms are converted into the surface structure format via language generation tools. Atallah et al. (2001b) and Topkara et al. (2006a) attained the embedding capacity of 0.5 bits per sentence with the syntactic transformation method. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:505 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
2The WePS-1 corpus includes data from the Web03 testbed (Mann, 2006) which follows similar annotation guidelines, although the number of document per ambiguous name is more variable. We generated word n-grams of length 2 to 5, English stopwords were removed, including Web specific stopwords, as file and domain extensions, etc.
Citation Sentence:
2The WePS-1 corpus includes data from the Web03 testbed ( Mann , 2006 ) which follows similar annotation guidelines , although the number of document per ambiguous name is more variable .
Context after the citation:
3Both corpora are available from the WePS website http://nlp.uned.es/weps 4A very sparse feature might never occur in a sentence with the person name. In that cases there is no local version of the feature. using the sentences found in the document text. Punctuation tokens (commas, dots, etc) were generalised as the same token. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:506 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
The local training method (Bottou and Vapnik, 1992) is widely employed in computer vision (Zhang et al., 2006; Cheng et al., 2010). 2 Local Training and Testing Experiments on NIST Chinese-to-English translation tasks show that our local training method significantly gains over MERT, with the maximum improvements up to 2.0 BLEU, and its efficiency is comparable to that of the global training method.
Citation Sentence:
The local training method ( Bottou and Vapnik , 1992 ) is widely employed in computer vision ( Zhang et al. , 2006 ; Cheng et al. , 2010 ) .
Context after the citation:
Compared with the global training method which tries to fit a single weight on the training data, the local one learns weights based on the local neighborhood information for each test example. It is superior to the global one when the data sets are not evenly distributed (Bottou and Vapnik, 1992; Zhang et al., 2006). Algorithm 1 Naive Local Training Method Input: T = {t_i}_{i=1}^N (test set), K (retrieval size), Dev (development set), D (retrieval data) Output: Translation results of T
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:507 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Recent work (Banko and Brill, 2001; Curran and Moens, 2002) has suggested that some tasks will benefit from using significantly more data. However, the greatest increase is in the amount of raw text available to be processed, e.g. the English Gigaword Corpus (Linguistic Data Consortium, 2003). This will require more efficient learning algorithms and implementations.
Citation Sentence:
Recent work ( Banko and Brill , 2001 ; Curran and Moens , 2002 ) has suggested that some tasks will benefit from using significantly more data .
Context after the citation:
Also, many potential applications of NLP will involve processing very large text databases. For instance, biomedical text-mining involves extracting information from the vast body of biological and medical literature; and search engines may eventually apply NLP techniques to the whole web. Other potential applications must process text online or in realtime. For example, Google currently answers 250 million queries per day, thus processing time must be minimised. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:508 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Rather than producing a complete analysis of sentences, the alternative is to perform only partial analysis of the syntactic structures in a text (Harris, 1957; Abney, 1991; Greffenstette, 1993). Shallow parsing is studied as an alternative to full-sentence parsing.
Citation Sentence:
Rather than producing a complete analysis of sentences , the alternative is to perform only partial analysis of the syntactic structures in a text ( Harris , 1957 ; Abney , 1991 ; Greffenstette , 1993 ) .
Context after the citation:
A lot of recent work on shallow parsing has been influenced by Abney's work (Abney, 1991), who has suggested to "chunk" sentences to base level phrases. For example, the sentence "He reckons the current account deficit will narrow to only $ 1.8 billion in September ." would be chunked as follows (Tjong Kim Sang and Buchholz, 2000): [NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP ∗This research is supported by NSF grants IIS-9801638, ITR-IIS-0085836 and an ONR MURI Award.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:509 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Following the work of Stymne and Cancedda (2011), we implement a linear-chain CRF merging system using the following features: stemmed (separated) surface form, part-of-speech14 and frequencies from the training corpus for bigrams/merging of word and word+1, word as true prefix, word+1 as true suffix, plus frequency comparisons of these. merge and ii) how to merge. Two decisions have to be taken: i) where to
Citation Sentence:
Following the work of Stymne and Cancedda ( 2011 ) , we implement a linear-chain CRF merging system using the following features : stemmed ( separated ) surface form , part-of-speech14 and frequencies from the training corpus for bigrams/merging of word and word +1 , word as true prefix , word +1 as true suffix , plus frequency comparisons of these .
Context after the citation:
The CRF is trained on the split monolingual data. It only proposes merging decisions, merging itself uses a list extracted from the monolingual data (Popovic et al., 2006). | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:51 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
The feasibility of automatically identifying outcome statements in secondary sources has been demonstrated by Niu and Hirst (2004). The work of Hearst (1996) demonstrates that faceted queries can be converted into simple filtering constraints to boost precision. PICO-based querying in information retrieval is merely an instance of faceted querying, which has been widely used by librarians since the introduction of automated retrieval systems (e.g., Meadow et al. 1989).
Citation Sentence:
The feasibility of automatically identifying outcome statements in secondary sources has been demonstrated by Niu and Hirst ( 2004 ) .
Context after the citation:
Their study also illustrates the importance of semantic classes and relations. However, extraction of outcome statements from secondary sources (meta-analyses, in this case) differs from extraction of outcomes from MEDLINE citations because secondary sources represent knowledge that has already been distilled by humans (which may limit its scope). Because secondary sources are often more consistently organized, it is possible to depend on certain surface cues for reliable extraction (which is not possible for MEDLINE abstracts in general). Our study tackles outcome identification in primary medical sources and demonstrates that respectable performance is possible with a feature-combination approach. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:510 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The standard approach is to train two models independently and then intersect their predictions (Och and Ney 2003). In practice, we see that the choice of which language is source versus target matters and changes the mistakes made by the model (the first row of panels in Figure 1). The directional nature of the generative models used to recover word alignments conflicts with their interpretation as translations.
Citation Sentence:
The standard approach is to train two models independently and then intersect their predictions ( Och and Ney 2003 ) .
Context after the citation:
However, we show that it is much better to train two directional models concurrently, coupling their posterior distributions over alignments to approximately agree. Let the directional models be defined as: →p(→z) (source→target) and ←p(←z) (target→source). We suppress dependence on x and y for brevity. Define z to range over the union of all possible
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:511 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
We use the same method as Andrews et al. (2009) for generating our multimodal corpora: for each word token in the text corpus, a feature is selected stochastically from the word's feature distribution, creating a word-feature pair.
Citation Sentence:
We use the same method as Andrews et al. ( 2009 ) for generating our multimodal corpora : for each word token in the text corpus , a feature is selected stochastically from the word 's feature distribution , creating a word-feature pair .
Context after the citation:
Words without grounded features are all given the same placeholder feature, also resulting in a wordfeature pair.5 That is, for the feature norm modality, we generate (word, feature norm) pairs; for the SURF modality, we generate (word, codeword) pairs, etc. The resulting stochastically generated corpus is used in its corresponding experiments. The 3D text-feature-association norm corpus is generated slightly differently: for each word in the original text corpus, we check the existence of multimodal features in either modality. If a word had no features, it is represented as a triple (word, placeholderFN, placeholderAN). | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:512 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
MI was also recently used for inference-rule SPs by Pantel et al. (2007). Normalizing by Pr(v) (yielding MI) allows us to use a constant threshold across all verbs. Thus rather than a single training procedure, we can actually partition the examples by predicate, and train a 1For a fixed verb, MI is proportional to Keller and Lapata (2003)'s conditional probability scores for pseudodisambiguation of (v, n, n′) triples: Pr(v|n) = Pr(v, n)/Pr(n), which was shown to be a better measure of association than co-occurrence frequency f(v, n).
Citation Sentence:
MI was also recently used for inference-rule SPs by Pantel et al. ( 2007 ) .
Context after the citation:
classifier for each predicate independently. The prediction becomes yv = Av · φv(n), where Av are the learned weights corresponding to predicate v and all features φv(n) = f(n) depend on the argument only. Some predicate partitions may have insufficient examples for training. Also, a predicate may occur in test data that was unseen during training.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:513 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
conclusion
Context before the citation:
Based on this advise (Moore and Quirk, 2007) exclude the latent segmentation variables and opt for a heuristic training procedure. The most similar efforts to ours, mainly (DeNero et al., 2006), conclude that segmentation variables in the generative translation model lead to overfitting while attaining higher likelihood of the training data than the heuristic estimator.
Citation Sentence:
Based on this advise ( Moore and Quirk , 2007 ) exclude the latent segmentation variables and opt for a heuristic training procedure .
Context after the citation:
In this work we also start out from a generative model with latent segmentation variables. However, we find out that concentrating the learning effort on smoothing is crucial for good performance. For this, we devise ITG-based priors over segmentations and employ a penalized version of Deleted Estimation working with EM at its core. The fact that our results (at least) match the heuristic estimates on a reasonably sized data set (947k parallel sentence pairs) is rather encouraging. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:514 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
For example, 10 million words of the American National Corpus (Ide et al., 2002) will have manually corrected POS tags, a tenfold increase over the Penn Treebank (Marcus et al., 1993), currently used for training POS taggers. Some of this new data will be manually annotated. NLP is experiencing an explosion in the quantity of electronic text available.
Citation Sentence:
For example , 10 million words of the American National Corpus ( Ide et al. , 2002 ) will have manually corrected POS tags , a tenfold increase over the Penn Treebank ( Marcus et al. , 1993 ) , currently used for training POS taggers .
Context after the citation:
This will require more efficient learning algorithms and implementations. However, the greatest increase is in the amount of raw text available to be processed, e.g. the English Gigaword Corpus (Linguistic Data Consortium, 2003). Recent work (Banko and Brill, 2001; Curran and Moens, 2002) has suggested that some tasks will benefit from using significantly more data. Also, many potential applications of NLP will involve processing very large text databases. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:515 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The forward and backward probabilities, p0j and pkn, can be computed using single-source algebraic path for the simpler semiring (R, +, ×)—or equivalently, by solving a sparse linear system of equations over R, a much-studied problem at O(n) space, O(nm) time, and faster approximations (Greenbaum, 1997). Write wjk as (pjk, vjk), and let w′jk = (p′jk, v′jk) denote the weight of the edge from j to k.19 Then it can be shown that w0n = (p0n, Σj,k p0j v′jk pkn). This speedup also works for cyclic graphs and for any V.
Citation Sentence:
The forward and backward probabilities , p0j and pkn , can be computed using single-source algebraic path for the simpler semiring ( R , + , × ) -- or equivalently , by solving a sparse linear system of equations over R , a much-studied problem at O ( n ) space , O ( nm ) time , and faster approximations ( Greenbaum , 1997 ) .
Context after the citation:
⢠A Viterbi variant of the expectation semiring exists: replace (3) with if(p1 > p2, (p1, v1), (p2, v2)). Here, the forward and backward probabilities can be computed in time only O(m + n log n) (Fredman and Tar an, 1987). k-best variants are also possible. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:516 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
4To prove (1)⇒(3), express f as an FST and apply the well-known Kleene-Schützenberger construction (Berstel and Reutenauer, 1988), taking care to write each regexp in the construction as a constant times a probabilistic regexp. The general form is illustrated by 3Conceptually, the parameters represent the probabilities of reading another a (λ); reading another b (ν); transducing b to p rather than q (µ); starting to transduce p to a rather than x (ρ). A central technique is to define a joint relation as a noisy-channel model, by composing a joint relation with a cascade of one or more conditional relations as in Fig. 1 (Pereira and Riley, 1997; Knight and Graehl, 1998).
Citation Sentence:
4To prove ( 1 ) ⇒ ( 3 ) , express f as an FST and apply the well-known Kleene-Schützenberger construction ( Berstel and Reutenauer , 1988 ) , taking care to write each regexp in the construction as a constant times a probabilistic regexp .
Context after the citation:
A full proof is straightforward, as are proofs of (3)⇒(2), (2)⇒(1). 5In (4), the randomness is in the smaller relation's choice of how to replace a match. One can also get randomness through the choice of matches, ignoring match possibilities by randomly deleting markers in Gerdemann and van Noord's construction. P(v, z) def = Σw,x,y P(v|w)P(w, x)P(y|x)P(z|y), implemented by composing 4 machines.6,7 There are also procedures for defining weighted FSTs that are not probabilistic (Berstel and Reutenauer, 1988). | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:517 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Second, software for utilizing this ontology already exists: MetaMap (Aronson 2001) identifies concepts in free text, and SemRep (Rindflesch and Fiszman 2003) extracts relations between the concepts. First, substantial understanding of the domain has already been codified in the Unified Medical Language System (UMLS) (Lindberg, Humphreys, and McCray 1993). This domain is well-suited for exploring the posed research questions for several reasons.
Citation Sentence:
Second , software for utilizing this ontology already exists : MetaMap ( Aronson 2001 ) identifies concepts in free text , and SemRep ( Rindflesch and Fiszman 2003 ) extracts relations between the concepts .
Context after the citation:
Both systems utilize and propagate semantic information from UMLS knowledge sources: the Metathesaurus, the Semantic Network, and the SPECIALIST lexicon. The 2004 version of the UMLS Metathesaurus (used in this work) contains information about over 1 million biomedical concepts and 5 million concept names from more than 100 controlled vocabularies. The Semantic Network provides a consistent categorization of all concepts represented in the UMLS Metathesaurus. Third, the paradigm of evidence-based medicine (Sackett et al. 2000) provides a task-based model of the clinical information-seeking process. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:518 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Our work is more similar to NLG work that concentrates on structural constraints such as generative poetry (Greene et al., 2010) (Colton et al., 2012) (Jiang and Zhou, 2008) or song lyrics (Wu et al., 2013) (Ramakrishnan A et al., 2009), where specified meter or rhyme schemes are enforced. The majority of NLG focuses on the satisfaction of a communicative goal, with examples such as Belz (2008) which produces weather reports from structured data or Mitchell et al. (2013) which generates descriptions of objects from images.
Citation Sentence:
Our work is more similar to NLG work that concentrates on structural constraints such as generative poetry ( Greene et al. , 2010 ) ( Colton et al. , 2012 ) ( Jiang and Zhou , 2008 ) or song lyrics ( Wu et al. , 2013 ) ( Ramakrishnan A et al. , 2009 ) , where specified meter or rhyme schemes are enforced .
Context after the citation:
In these papers soft semantic goals are sometimes also introduced that seek responses to previous lines of poetry or lyric. Computational creativity is another subfield of NLG that often does not fix an a priori meaning in its output. Examples such as Özbal et al. (2013) and Valitutti et al. (2013) use template filling techniques guided by quantified notions of humor or how catchy a phrase is. Our motivation for generation of material for language education exists in work such as Sumita et al. (2005) and Mostow and Jang (2012), which deal with automatic generation of classic fill in the blank questions. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:519 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Manning (1993) attempts to improve on the approach of Brent (1993) by passing raw text through a stochastic tagger and a finite-state parser (which includes a set of simple rules for subcategorization frame recognition) in order to extract verbs and the constituents with which they co-occur. Ushioda et al. (1993) employ an additional statistical method based on log-linear models and Bayes' theorem to filter the extra noise introduced by the parser and were the first to induce relative frequencies for the extracted frames. The experiment is limited by the fact that all prepositional phrases are treated as adjuncts.
Citation Sentence:
Manning ( 1993 ) attempts to improve on the approach of Brent ( 1993 ) by passing raw text through a stochastic tagger and a finite-state parser ( which includes a set of simple rules for subcategorization frame recognition ) in order to extract verbs and the constituents with which they co-occur .
Context after the citation:
He assumes 19 different subcategorization frame definitions, and the extracted frames include details of specific prepositions. The extracted frames are noisy as a result of parser errors and so are filtered using the binomial hypothesis theory (BHT), following Brent (1993). Applying his technique to approximately four million words of New York Times newswire, Manning acquired 4,900 verb-subcategorization frame pairs for 3,104 verbs, an average of 1.6 frames per verb. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:52 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Most DOP models, such as in Bod (1993), Goodman (1996), Bonnema et al. (1997), Sima'an (2000) and Collins & Duffy (2002), use a likelihood criterion in defining the best parse tree: they take (some notion of) the most likely (i.e. most probable) tree as a candidate for the best tree of a sentence.
Citation Sentence:
Most DOP models , such as in Bod ( 1993 ) , Goodman ( 1996 ) , Bonnema et al. ( 1997 ) , Sima'an ( 2000 ) and Collins & Duffy ( 2002 ) , use a likelihood criterion in defining the best parse tree : they take ( some notion of ) the most likely ( i.e. most probable ) tree as a candidate for the best tree of a sentence .
Context after the citation:
We will refer to these models as Likelihood-DOP models, but in this paper we will specifically mean by "Likelihood-DOP" the PCFG-reduction of Bod (2001) given in Section 2.2. In Bod (2000b), an alternative notion for the best parse tree was proposed based on a simplicity criterion: instead of producing the most probable tree, this model produced the tree generated by the shortest derivation with the fewest training subtrees. We will refer to this model as Simplicity-DOP. In case the shortest derivation is not unique, Bod (2000b) proposes to back off to a frequency ordering of the subtrees. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:520 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The best performance on the WSJ corpus was achieved by a combination of the SATZ system (Palmer and Hearst 1997) with the Alembic system (Aberdeen et al. 1995): a 0.5% error rate. State-of-the-art machine learning and rule-based SBD systems achieve an error rate of 0.8–1.5% measured on the Brown corpus and the WSJ corpus. Row C of Table 4 summarizes the highest results known to us (for all three tasks) produced by automatic systems on the Brown corpus and the WSJ corpus.
Citation Sentence:
The best performance on the WSJ corpus was achieved by a combination of the SATZ system ( Palmer and Hearst 1997 ) with the Alembic system ( Aberdeen et al. 1995 ) : a 0.5 % error rate .
Context after the citation:
The best performance on the Brown corpus, a 0.2% error rate, was reported by Riley (1989), who trained a decision tree classifier on a 25-million-word corpus. In the disambiguation of capitalized words, the most widespread method is POS tagging, which achieves about a 3% error rate on the Brown corpus and a 5% error rate on the WSJ corpus, as reported in Mikheev (2000). We are not aware of any studies devoted to the identification of abbreviations with comprehensive evaluation on either the Brown corpus or the WSJ corpus. In row D of Table 4, we summarized our main results: the results obtained by the application of our SBD rule set, which uses the information provided by the DCA to capitalized word disambiguation applied together with lexical lookup (as described in Section 7.5), and the abbreviation-handling strategy, which included the guessing heuristics, the DCA, and the list of 270 abbreviations (as described in Section 6). | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:521 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
The extracted frames are noisy as a result of parser errors and so are filtered using the binomial hypothesis theory (BHT), following Brent (1993). frame definitions, and the extracted frames include details of specific prepositions. He assumes 19 different subcategorization
Citation Sentence:
The extracted frames are noisy as a result of parser errors and so are filtered using the binomial hypothesis theory ( BHT ) , following Brent ( 1993 ) .
Context after the citation:
Applying his technique to approximately four million words of New York Times newswire, Manning acquired 4,900 verb-subcategorization frame pairs for 3,104 verbs, an average of 1.6 frames per verb. Briscoe and Carroll (1997) predefine 163 verbal subcategorization frames, obtained by manually merging the classes exemplified in the COMLEX (MacLeod, Grishman, and Meyers 1994) and ANLT (Boguraev et al. 1987) dictionaries and adding around 30 frames found by manual inspection. The frames incorporate control information and details of specific prepositions. Briscoe and Carroll (1997) refine the BHT with a priori information about the probabilities of subcategorization frame membership and use it to filter the induced frames. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:522 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
This method allows the efficient retrieval of arbitrary length n-grams (Nagao and Mori, 94; Haruno et al., 96; Ikehara et al., 96; Shimohata et al., 1997; Russell, 1998). For sake of efficiency, we used the suffix array technique to get a compact representation of our training corpus. Our approach relies on distributional and frequency statistics computed on each sequence of words found in a training corpus.
Citation Sentence:
This method allows the efficient retrieval of arbitrary length n-grams ( Nagao and Mori , 94 ; Haruno et al. , 96 ; Ikehara et al. , 96 ; Shimohata et al. , 1997 ; Russell , 1998 ) .
Context after the citation:
The literature abounds in measures that can help to decide whether words that co-occur are linguistically significant or not. In this work, the strength of association of a sequence of words wr = w1, wn is computed by two measures: a likelihood-based one p(wr) (where p is the likelihood ratio given in (Dunning, 93)) and an entropy-based one e(w) (Shimohata et al., 1997). Letting T stand for the training text and m a token: Intuitively, the first measurement accounts for the fact that parts of a sequence of words that should be considered as a whole should not appear often by themselves. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:523 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The current system learns finite state flowcharts whereas typical learning systems usually acquire coefficient values as in Minsky and Papert (1969), assertional statements as in Michalski (1980), or semantic nets as in Winston (1975). The VNLCE processor may be considered to be a learning system of the tradition described, for example, in Michalski et al. (1984). It self activates to bias recognition toward historically observed patterns but is not otherwise observable.
Citation Sentence:
The current system learns finite state flowcharts whereas typical learning systems usually acquire coefficient values as in Minsky and Papert ( 1969 ) , assertional statements as in Michalski ( 1980 ) , or semantic nets as in Winston ( 1975 ) .
Context after the citation:
That is, the current system learns procedures rather than data structures. There is some literature on procedure acquisition such as the LISP synthesis work described in Biermann et al. (1984) and the PROLOG synthesis method of Shapiro (1982). However, the latter methodologies have not been applied to dialogue acquisition. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:524 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
conclusion
Context before the citation:
Our most accurate product model achieves an F score of 92.5 without the use of discriminative reranking and comes close to the best known numbers on this test set (Zhang et al., 2009). Our most accurate single grammar achieves an F score of 91.6 on the WSJ test set, rivaling discriminative reranking approaches (Charniak and Johnson, 2005) and products of latent variable grammars (Petrov, 2010), despite being a single generative PCFG. Second, the diversity of the individual grammars controls the gains that can be obtained by combining multiple grammars into a product model.
Citation Sentence:
Our most accurate product model achieves an F score of 92.5 without the use of discriminative reranking and comes close to the best known numbers on this test set ( Zhang et al. , 2009 ) .
Context after the citation:
In future work, we plan to investigate additional methods for increasing the diversity of our selftrained models. One possibility would be to utilize more unlabeled data or to identify additional ways to bias the models. It would also be interesting to determine whether further increasing the accuracy of the model used for automatically labeling the unlabeled data can enhance performance even more. A simple but computationally expensive way to do this would be to parse the data with an SM7 product model. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:525 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
To demonstrate that this is possible we have implemented a system which constructs dictionary entries for the PATR-II system (Shieber, 1984 and references therein). The output of the transformation program can be used to derive entries which are appropriate for particular grammatical formalisms.
Citation Sentence:
To demonstrate that this is possible we have implemented a system which constructs dictionary entries for the PATR-II system ( Shieber , 1984 and references therein ) .
Context after the citation:
PATR-II was chosen because it has been reimplemented in Cambridge and was therefore, available; however, the task would be nearly identical if we were constructing entries for a system based on GPSG, FUG or LFG. We intend to use the LDOCE source in the same way to derive most of the lexicon for the general purpose, morphological and syntactic parser we are developing. The latter employs a grammatical formalism based on GPSG; the comparatively theory neutral lexical entries that we construct from LDOCE should translate straightforwardly into this framework as well. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:526 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Other works (Kasper et al., 1995; Becker and Lopez, 2000) convert HPSG grammars into LTAG grammars. The manual translation demanded considerable efforts from the translator, and obscures the equivalence between the original and obtained grammars. Thus the translation was manual and grammar dependent.
Citation Sentence:
Other works ( Kasper et al. , 1995 ; Becker and Lopez , 2000 ) convert HPSG grammars into LTAG grammars .
Context after the citation:
However, given the greater expressive power of HPSG, it is impossible to convert an arbitrary HPSG grammar into an LTAG grammar. Therefore, a conversion from HPSG into LTAG often requires some restrictions on the HPSG grammar to suppress its generative capacity. Thus, the conversion loses the equivalence of the grammars, and we cannot gain the above advantages. Section 2 reviews the source and the target grammar formalisms of the conversion algorithm. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:527 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
The work of Hearst (1996) demonstrates that faceted queries can be converted into simple filtering constraints to boost precision. PICO-based querying in information retrieval is merely an instance of faceted querying, which has been widely used by librarians since the introduction of automated retrieval systems (e.g., Meadow et al. 1989). Although originally developed as a tool to assist in query formulation, Booth (2000) pointed out that PICO frames can be employed to structure IR results for improving precision.
Citation Sentence:
The work of Hearst ( 1996 ) demonstrates that faceted queries can be converted into simple filtering constraints to boost precision .
Context after the citation:
The feasibility of automatically identifying outcome statements in secondary sources has been demonstrated by Niu and Hirst (2004). Their study also illustrates the importance of semantic classes and relations. However, extraction of outcome statements from secondary sources (meta-analyses, in this case) differs from extraction of outcomes from MEDLINE citations because secondary sources represent knowledge that has already been distilled by humans (which may limit its scope). Because secondary sources are often more consistently organized, it is possible to depend on certain surface cues for reliable extraction (which is not possible for MEDLINE abstracts in general). | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:528 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Efficient hardware implementation is also possible via chip-level parallelism (Rote, 1985). (Mohri, 2002). 19Multiple edges from j to k are summed into a single edge.
Citation Sentence:
Efficient hardware implementation is also possible via chip-level parallelism ( Rote , 1985 ) .
Context after the citation:
⢠In many cases of interest, Ti is an acyclic graph.20 Then Tar anâs method computes w0j for each j in topologically sorted order, thereby finding ti in a linear number of â and â operations. For HMMs (footnote 11), Ti is the familiar trellis, and we would like this computation of ti to reduce to the forwardbackward algorithm (Baum, 1972). But notice that it has no backward pass. In place of pushing cumulative probabilities backward to the arcs, it pushes cumulative arcs (more generally, values in V ) forward to the probabilities. | FutureWork | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:529 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
From an IR view, a lot of specialized research has already been carried out for medical applications, with emphasis on the lexico-semantic aspects of dederivation and decomposition (Pacak et al., 1980; Norton and Pacak, 1983; Wolff, 1984; Wingert, 1985; Dujols et al., 1991; Baud et al., 1998). This is particularly true for the medical domain. When it comes to a broader scope of morphological analysis, including derivation and composition, even for the English language only restricted, domain-specific algorithms exist.
Citation Sentence:
From an IR view , a lot of specialized research has already been carried out for medical applications , with emphasis on the lexico-semantic aspects of dederivation and decomposition ( Pacak et al. , 1980 ; Norton and Pacak , 1983 ; Wolff , 1984 ; Wingert , 1985 ; Dujols et al. , 1991 ; Baud et al. , 1998 ) .
Context after the citation:
While one may argue that single-word compounds are quite rare in English (which is not the case in the medical domain either), this is certainly not true for German and other basically agglutinative languages known for excessive single-word nominal compounding. This problem becomes even more pressing for technical sublanguages, such as medical German (e.g., “Blut druck mess gerät” translates to “device for measuring blood pressure”). The problem one faces from an IR point of view is that besides fairly standardized nominal compounds, which already form a regular part of the sublanguage proper, a myriad of ad hoc compounds are formed on the fly which cannot be anticipated when formulating a retrieval query though they appear in relevant documents. Hence, enumerating morphological variants in a semi-automatically generated lexicon, such as proposed for French (Zweigenbaum et al., 2001), turns out to be infeasible, at least for German and related languages. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:53 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
method
Context before the citation:
2We could just as easily use other symmetric "association" measures, such as φ² (Gale & Church, 1991) or the Dice coefficient (Smadja, 1992). However, n(u) = Σv n(u,v), which is not the same as the frequency of u, because each token of u can co-occur with several different v's. If uk and vk are indeed mutual translations, then their tendency to 1The co-occurrence frequency of a word type pair is simply the number of times the pair co-occurs in the corpus.
Citation Sentence:
2We could just as easily use other symmetric `` association '' measures , such as φ² ( Gale & Church , 1991 ) or the Dice coefficient ( Smadja , 1992 ) .
Context after the citation:
co-occur is called a direct association. Now, suppose that uk and uk±i often co-occur within their language. Then vk and uk+i will also co-occur more often than expected by chance. The arrow connecting vk and uk±i in Figure 1 represents an indirect association, since the association between vk and uk±i arises only by virtue of the association between each of them and uk . | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:530 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Krahmer and Theune (2002) have argued that Dale and Reiter's (1995) dichotomy between salient and nonsalient objects (where the objects in the domain are the salient ones) should be replaced by an account that takes degrees of salience into account: No object can be too unsalient to be referred to, as long as the right properties are available.
Citation Sentence:
Krahmer and Theune ( 2002 ) have argued that Dale and Reiter 's ( 1995 ) dichotomy between salient and nonsalient objects ( where the objects in the domain are the salient ones ) should be replaced by an account that takes degrees of salience into account : No object can be too unsalient to be referred to , as long as the right properties are available .
Context after the citation:
In effect, this proposal (which measured salience numerically) analyzes the black mouse as denoting the unique most salient object in the domain that is both black and a mouse. Now suppose we let GRE treat salience just like other gradable Attributes. Suppose there are ten mice, five of which are black, whose degrees of salience are 1, 1, 3, 4, and 5 (the last one being most salient), while the other objects in the domain (cats, white mice) all have a higher salience. Then our algorithm might generate this list of properties: | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:531 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
The acquisition of dialogue as implemented in VNLCE is reminiscent of the program synthesis methodology developed by Biermann and Krishnaswamy (1976) where program flowcharts were constructed from traces of their behaviors. A verb is just as likely to be mis-recognized or not recognized as is a noun, adjective, determiner, etc. Thus, an error in this work has no pattern but occurs probabilistically.
Citation Sentence:
The acquisition of dialogue as implemented in VNLCE is reminiscent of the program synthesis methodology developed by Biermann and Krishnaswamy ( 1976 ) where program flowcharts were constructed from traces of their behaviors .
Context after the citation:
However, the "flowcharts" in the current project are probabilistic in nature and the problems associated with matching incoming sentences to existing nodes has not been previously addressed. Another dialogue acquisition system has been developed by Ho (1984). However, that system has different goals: to enable the user to consciously design a dialogue to embody a particular human-machine interaction. The acquisition system described here is aimed at dealing with ill-formed input and is completely automatic and invisible to the user. | CompareOrContrast | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:532 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Character classes, such as punctuation, are defined according to the Unicode Standard (Aliprand et al., 2004). Each punctuation symbol is considered a separate token. We define a token and introduce whitespace boundaries between every span of one or more alphabetic or numeric characters.
Citation Sentence:
Character classes , such as punctuation , are defined according to the Unicode Standard ( Aliprand et al. , 2004 ) .
Context after the citation: | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:533 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Rather than producing a complete analysis of sentences, the alternative is to perform only partial analysis of the syntactic structures in a text (Harris, 1957; Abney, 1991; Greffenstette, 1993). Shallow parsing is studied as an alternative to full-sentence parsing.
Citation Sentence:
Rather than producing a complete analysis of sentences , the alternative is to perform only partial analysis of the syntactic structures in a text ( Harris , 1957 ; Abney , 1991 ; Greffenstette , 1993 ) .
Context after the citation:
A lot of recent work on shallow parsing has been influenced by Abney's work (Abney, 1991), who has suggested to “chunk” sentences to base level phrases. For example, the sentence “He reckons the current account deficit will narrow to only $ 1.8 billion in September .” would be chunked as follows (Tjong Kim Sang and Buchholz, 2000): [NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP ∗This research is supported by NSF grants IIS-9801638, ITR-IIS-0085836 and an ONR MURI Award. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:534 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Finally, it has been shown by Groesser (1981) that the ratio of derived to explicit information necessary for understanding a piece of text is about 8:1; furthermore, our reading of the analysis of five paragraphs by Crothers (1979) strongly suggests that only the most direct or obvious inferences are being made in the process of building a model or constructing a theory of a paragraph. Secondly, the cooperative principle of Grice (1975, 1978), under the assumption that referential levels of a writer and a reader are quite similar, implies that the writer should structure the text in a way that makes the construction of his intended model easy for the reader; and this seems to imply that he should appeal only to the most direct knowledge of the reader. First of all, iteration would increase the complexity of building a model of a paragraph; infinite iteration would almost certainly make impossible such a construction in real time.
Citation Sentence:
Finally , it has been shown by Groesser ( 1981 ) that the ratio of derived to explicit information necessary for understanding a piece of text is about 8:1 ; furthermore , our reading of the analysis of five paragraphs by Crothers ( 1979 ) strongly suggests that only the most direct or obvious inferences are being made in the process of building a model or constructing a theory of a paragraph .
Context after the citation:
Thus, for example, we can expect that in the worst case only one or two steps of such an iteration would be needed to find answers to wh-questions. Let P be a paragraph, let .15 = (s1,... , S) be its translation into a sequence of logical formulas. The set of all predicates appearing in X will be denoted by Pred(X). Definition | Motivation | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:535 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
In other methods, lexical resources are specifically tailored to meet the requirements of the domain (Rosario and Hearst, 2001) or the system (Gomez, 1998). Some methods of semantic relation analysis rely on predefined templates filled with information from processed texts (Baker et al., 1998).
Citation Sentence:
In other methods , lexical resources are specifically tailored to meet the requirements of the domain ( Rosario and Hearst , 2001 ) or the system ( Gomez , 1998 ) .
Context after the citation:
Such systems extract information from some types of syntactic units (clauses in (Fillmore and Atkins, 1998; Gildea and Jurafsky, 2002; Hull and Gomez, 1996); noun phrases in (Hull and Gomez, 1996; Rosario et al., 2002)). Lists of semantic relations are designed to capture salient domain information. In the Rapid Knowledge Formation Project (RKF) a support system was developed for domain experts. It helps them build complex knowledge bases by combining components: events, entities and modifiers (Clark and Porter, 1997). | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:536
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
The exact form of M(Si) need not be discussed at this point; it could be a conceptual dependence graph (Schank and Abelson 1977), a deep parse of Si, or some other representation. We denote the meaning of each sentence Si with the notation M(Si). The expectation parser will then use this information to improve its ability to recognize the next incoming sentence.
Citation Sentence:
The exact form of M ( Si ) need not be discussed at this point ; it could be a conceptual dependence graph ( Schank and Abelson 1977 ) , a deep parse of Si , or some other representation .
Context after the citation:
A user behavior is represented by a network, or directed graph, of such meanings. At the beginning of a task, the state of the interaction is represented by the start state of the graph. The immediate successors of this state are the typical opening meaning structures for this user, and succeeding states represent, historically, paths that have been followed by this user. It is important that if two sentences, Si and Sj, have approximately the same meaning this should be clear in the representations M(Si) and M(Sj). | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:537
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Some methods are based on likelihood (Och and Ney, 2002; Blunsom et al., 2008), error rate (Och, 2003; Zhao and Chen, 2009; Pauls et al., 2009; Galley and Quirk, 2011), margin (Watanabe et al., 2007; Chiang et al., 2008) and ranking (Hopkins and May, 2011), and among which minimum error rate training (MERT) (Och, 2003) is the most popular one. h is a feature vector which is scaled by a weight W. Parameter estimation is one of the most important components in SMT, and various training methods have been proposed to tune W. where f and e (e') are source and target sentences, respectively.
Citation Sentence:
Some methods are based on likelihood ( Och and Ney , 2002 ; Blunsom et al. , 2008 ) , error rate ( Och , 2003 ; Zhao and Chen , 2009 ; Pauls et al. , 2009 ; Galley and Quirk , 2011 ) , margin ( Watanabe et al. , 2007 ; Chiang et al. , 2008 ) and ranking ( Hopkins and May , 2011 ) , and among which minimum error rate training ( MERT ) ( Och , 2003 ) is the most popular one .
Context after the citation:
All these training methods follow the same pipeline: they train only a single weight on a given development set, and then use it to translate all the sentences in a test set. We call them a global training method. One of its advantages is that it allows us to train a single weight offline and thereby it is efficient. However, due to the diversity and uneven distribution of source sentences (Li et al., 2010), there are some shortcomings in this pipeline. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:538
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Similarly, (Barzilay and Lee, 2003) and (Shinyanma et al., 2002) learn sentence level paraphrase templates from a corpus of news articles stemming from different news source. For instance, (Lin and Pantel, 2001) acquire two-argument templates (inference rules) from corpora using an extended version of the distributional analysis in which paths in dependency trees that have similar arguments are taken to be close in meaning. Because of the large, open domain corpora these systems deal with, coverage and robustness are key issues and much on the work on paraphrases in that domain is based on automatic learning techniques.
Citation Sentence:
Similarly , ( Barzilay and Lee , 2003 ) and ( Shinyanma et al. , 2002 ) learn sentence level paraphrase templates from a corpus of news articles stemming from different news source .
Context after the citation:
And (Glickman and Dagan, 2003) use clustering and similarity measures to identify similar contexts in a single corpus and extract verbal paraphrases from these contexts. Such machine learning approaches have known pros and cons. On the one hand, they produce large scale resources at little man labour cost. On the other hand, the degree of descriptive abstraction offered by the list of inference or paraphrase rules they output is low. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:539
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Rather than producing a complete analysis of sentences, the alternative is to perform only partial analysis of the syntactic structures in a text (Harris, 1957; Abney, 1991; Greffenstette, 1993). Shallow parsing is studied as an alternative to full-sentence parsing.
Citation Sentence:
Rather than producing a complete analysis of sentences , the alternative is to perform only partial analysis of the syntactic structures in a text ( Harris , 1957 ; Abney , 1991 ; Greffenstette , 1993 ) .
Context after the citation:
A lot of recent work on shallow parsing has been influenced by Abney's work (Abney, 1991), who has suggested to "chunk" sentences to base level phrases. For example, the sentence "He reckons the current account deficit will narrow to only $ 1.8 billion in September ." would be chunked as follows (Tjong Kim Sang and Buchholz, 2000): [NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP *This research is supported by NSF grants IIS-9801638, ITR-IIS-0085836 and an ONR MURI Award. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:54
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
This setup is also scalable to a higher number of word pairs (350) as was shown in Gurevych (2006). She used an adapted experimental setup where test subjects had to assign discrete values {0,1,2,3,4} and word pairs were presented in isolation. Gurevych (2005) replicated the experiment of Rubenstein and Goodenough with the original 65 word pairs translated into German.
Citation Sentence:
This setup is also scalable to a higher number of word pairs ( 350 ) as was shown in Gurevych ( 2006 ) .
Context after the citation:
Finkelstein et al. (2002) annotated a larger set of word pairs (353), too. They used a 0-10 range of relatedness scores, but did not give further details about their experimental setup. In psycholinguistics, relatedness of words can also be determined through association tests (Schulte im Walde and Melinger, 2005). Results of such experiments are hard to quantify and cannot easily serve as the basis for evaluating SR measures. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:540
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
More recent work on terminology structuring has focussed on formal similarity to develop hypotheses on the semantic relationships between terms: Daille (2003) uses derivational morphology; Grabar and Zweigenbaum (2002) use, as a starting point, a number of identical characters. Hence, synonyms, co-hyponyms, hyperonyms, etc. are not differentiated. This approach, which uses words that appear in the context of terms to formulate hypotheses on their semantic relatedness (Habert et al., 1996, for example), does not specify the relationship itself.
Citation Sentence:
More recent work on terminology structuring has focussed on formal similarity to develop hypotheses on the semantic relationships between terms : Daille ( 2003 ) uses derivational morphology ; Grabar and Zweigenbaum ( 2002 ) use , as a starting point , a number of identical characters .
Context after the citation:
Up to now, the focus has been on nouns and adjectives, since these structuring methods have been applied to lists of extracted candidate terms (Habert et al., 1996; Daille, 2003) or to lists of admitted terms (Grabar and Zweigenbaum, 2002). As a consequence, relationships considered have been mostly synonymic or taxonomic, or defined as term variations. On the other hand, other work has been carried out in order to acquire collocations. Most of these endeavours have focused on purely statistical acquisition techniques (Church and Hanks, 'However, our interpretation of LFs in this work is much looser, since we admitted verbs that would not be considered to be members of true collocations as Mel'cuk et al. (1984 1999) define them, i.e. groups of lexical units that share a restricted cooccurrence relationship. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:541
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Louwerse et al. (2006) and Louwerse et al. (2007) study the relation between eye gaze, facial expression, pauses and dialogue structure in annotated English map-task dialogues (Anderson et al., 1991) and find correlations between the various modalities both within and across speakers. Sridhar et al. (2009) obtain promising results in dialogue act tagging of the Switchboard-DAMSL corpus using lexical, syntactic and prosodic cues, while Gravano and Hirschberg (2009) examine the relation between particular acoustic and prosodic turn-yielding cues and turn taking in a large corpus of task-oriented dialogues. Work has also been done on prosody and gestures in the specific domain of map-task dialogues, also targeted in this paper.
Citation Sentence:
Louwerse et al. ( 2006 ) and Louwerse et al. ( 2007 ) study the relation between eye gaze , facial expression , pauses and dialogue structure in annotated English map-task dialogues ( Anderson et al. , 1991 ) and find correlations between the various modalities both within and across speakers .
Context after the citation:
Finally, feedback expressions (head nods and shakes) are successfully predicted from speech, prosody and eye gaze in interaction with Embodied Communication Agents as well as human communication (Fujie et al., 2004; Morency et al., 2005; Morency et al., 2007; Morency et al., 2009). Our work is in line with these studies, all of which focus on the relation between linguistic expressions, prosody, dialogue content and gestures. In this paper, we investigate how feedback expressions can be classified into different dialogue act categories based on prosodic and gesture features. Our data are made up by a collection of eight video-recorded map-task dialogues in Danish, which were annotated with phonetic and prosodic information. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:542
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Some examples include text categorization (Lewis and Catlett 1994), base noun phrase chunking (Ngai and Yarowsky 2000), part-of-speech tagging (Engelson Dagan 1996), spelling confusion set disambiguation (Banko and Brill 2001), and word sense disambiguation (Fujii et al. 1998). applications. In addition to PP-attachment, as discussed in this article, sample selection has been successfully applied to other classification
Citation Sentence:
Some examples include text categorization ( Lewis and Catlett 1994 ) , base noun phrase chunking ( Ngai and Yarowsky 2000 ) , part-of-speech tagging ( Engelson Dagan 1996 ) , spelling confusion set disambiguation ( Banko and Brill 2001 ) , and word sense disambiguation ( Fujii et al. 1998 ) .
Context after the citation:
More challenging are learning problems whose objective is not classification, but generation of complex structures. One example in this direction is applying sample selection to semantic parsing (Thompson, Califf, and Mooney 1999), in which sentences are paired with their semantic representation using a deterministic shift-reduce parser. A recent effort that focuses on statistical syntactic parsing is the work by Tang, Lou, and Roukos (2002). Their results suggest that the number of training examples can be further reduced by using a hybrid evaluation function that combines a hypothesis-performance-based metric such as tree entropy ("word entropy" in their terminology) with a problem-space-based metric such as sentence clusters. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:543
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
It is known that certain cue words and phrases (Hirschberg and Litman 1993) can serve as explicit indicators of discourse structure. DA labeling accuracy results should be compared to a baseline (chance) accuracy of 35%, the relative frequency of the most frequent DA type (STATEMENT) in our test set.4 5.1 Dialogue Act Classification Using Words DA classification using words is based on the observation that different DAs use distinctive word strings. Finally, we present results for a combination of all knowledge sources.
Citation Sentence:
It is known that certain cue words and phrases ( Hirschberg and Litman 1993 ) can serve as explicit indicators of discourse structure .
Context after the citation:
Similarly, we find distinctive correlations between certain phrases and DA types. For example, 92.4% of the uh-huh's occur in BACKCHANNELS, and 88.4% of the trigrams "<start> do you" occur in YES-NO-QUESTIONS. To leverage this information source, without hand-coding knowledge about which words are indicative of which DAs, we will use statistical language models that model the full word sequences associated with each DA type. 5.1.1 Classification from True Words. | Motivation | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:544
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
The last point may be seen better if we look at some differences between our system and KRYPTON, which also distinguishes between an object theory and background knowledge (cfXXX Brachman et al. 1985). Rather, R must be treated as a separate logical level for these syntactic reasons, and because of its functionâbeing a pool of possibly conflicting semantic constraints. Even if all background knowledge were described, as in our examples, by sets of first order theories, because of the preferences and inconsistencies of meanings, we could not treat R as a flat database of factsâsuch a model simply would not be realistic.
Citation Sentence:
The last point may be seen better if we look at some differences between our system and KRYPTON , which also distinguishes between an object theory and background knowledge ( cfXXX Brachman et al. 1985 ) .
Context after the citation:
KRYPTON's A-box, encoding the object theory as a set of assertions, uses standard first order logic; the T-box contains information expressed in a frame-based language equivalent to a fragment of FOL. However, the distinction between the two parts is purely functional, that is, characterized in terms of the system's behavior. From the logical point of view, the knowledge base is the union of the two boxes, i.e. a theory, and the entailment is standard. In our system, we also distinguish between the "definitional" and factual information, but the "definitional" part contains collections of mutually excluding theories, not just of formulas describing a semantic network. | CompareOrContrast | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:545
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
One approach to this problem is that taken by the ASCOT project (Akkerman et al., 1985; Akkerman, 1986). This type of error and inconsistency arises because grammatical codes are constructed by hand and no automatic checking procedure is attempted (see Michiels, 1982, for further comment). Presumably this kind of inconsistency arose because one member of the team of lexicographers realised that this form of elision saved more space.
Citation Sentence:
One approach to this problem is that taken by the ASCOT project ( Akkerman et al. , 1985 ; Akkerman , 1986 ) .
Context after the citation:
In this project, a new lexicon is being manually derived from LDOCE. The coding system for the new lexicon is a slightly modified and simplified version of the LDOCE scheme, without any loss of generalisation and expressive power. More importantly, the assignment of codes for problematic or erroneously labelled words is being corrected in an attempt to make the resulting lexicon more appropriate for automated analysis. In the medium term this approach, though time consuming, will be of some utility for producing more reliable lexicons for natural language processing. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:546
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
Though we could have used a further downstream measure like BLEU, METEOR has also been shown to directly correlate with translation quality (Banerjee and Lavie, 2005) and is simpler to measure. We use a reordering score based on the reordering penalty from the METEOR scoring metric. In our experiments we work with a set of English-Japanese reordering rules1 and gold reorderings based on human generated correct reordering of an aligned target sentences.
Citation Sentence:
Though we could have used a further downstream measure like BLEU , METEOR has also been shown to directly correlate with translation quality ( Banerjee and Lavie , 2005 ) and is simpler to measure .
Context after the citation:
All reordering augmented-loss experiments are run with the same treebank data as the baseline (the training portions of PTB, Brown, and QTB). The extrinsic reordering training data consists of 10930 examples of English sentences and their correct Japanese word-order. We evaluate our results on an evaluation set of 6338 examples of similarly created reordering data. The reordering cost, evaluation | Motivation | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:547
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
For all experiments reported in this section we used the syntactic dependency parser MaltParser v1.3 (Nivre 2003, 2008; Kübler, McDonald, and Nivre 2009), a transition-based parser with an input buffer and a stack, which uses SVM classifiers Statistics about this split (after conversion to the CATiB dependency format) are given in Table 1. For all experiments, unless specified otherwise, we used the dev set.10 We kept the test unseen ("blind") during training and model development.
Citation Sentence:
For all experiments reported in this section we used the syntactic dependency parser MaltParser v1 .3 ( Nivre 2003 , 2008 ; Kübler , McDonald , and Nivre 2009 ) , a transition-based parser with an input buffer and a stack , which uses SVM classifiers
Context after the citation:
10 We use the term "dev set" to denote a non-blind test set, used for model development (feature selection and feature engineering). We do not perform further weight optimization (which, if done, is done on a separate "tuning set"). to predict the next state in the parse derivation. All experiments were done using the Nivre "eager" algorithm.11 There are five default attributes in the MaltParser terminology for each token in the text: word ID (ordinal position in the sentence), word-form, POS tag, head (parent word ID), and deprel (the dependency relation between the current word and its parent). | Uses | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:548
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
We conducted experiments with gold features to assess the potential of these features, and with predicted features, obtained from training a simple maximum likelihood estimation classifier on this resource (Alkuhlani and Habash 2012).19 The first part of Table 8 shows that the RAT (rationality) feature is very relevant (in gold), but suffers from low accuracy (no gains in machine-predicted input). To address this issue, we use a version of the PATB3 training and dev sets manually annotated with functional gender, number, and rationality (Alkuhlani and Habash 2011).18 This is the first resource providing all three features (ElixirFm only provides functional number, and to some extent functional gender). The ElixirFM lexical resource used previously provided functional NUMBER feature values but no functional GENDER values, nor RAT (rationality, or humanness) values.
Citation Sentence:
We conducted experiments with gold features to assess the potential of these features , and with predicted features , obtained from training a simple maximum likelihood estimation classifier on this resource ( Alkuhlani and Habash 2012 ) .19 The first part of Table 8 shows that the RAT ( rationality ) feature is very relevant ( in gold ) , but suffers from low accuracy ( no gains in machine-predicted input ) .
Context after the citation:
The next two parts show the advantages of functional gender and number (denoted with a FN* prefix) over their surface-based counterparts. The fourth part of the table shows the combination of these functional features with the other features that participated in the best combination so far (LMM, the extended DET2, and PERSON); without RAT, this combination is at least as useful as its form-based counterpart, in both gold and predicted input; adding RAT to this combination yields 0.4% (absolute) gain in gold, offering further support to the relevance of the rationality feature, but a slight decrease in predicted input, presumably due to insufficient accuracy again. The last part of the table revalidates the gains achieved with the best controlled feature combination, using CATIBEX, the best performing tag set with predicted input. Note, however, that the 1% (absolute) advantage of CATIBEX (without additional features) over the morphology-free CORE12 on machine-predicted input (Table 2) has | Uses | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:549
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Thus, over the past few years, along with advances in the use of learning and statistical methods for acquisition of full parsers (Collins, 1997; Charniak, 1997a; Charniak, 1997b; Ratnaparkhi, 1997), significant progress has been made on the use of statistical learning methods to recognize shallow parsing patterns syntactic phrases or words that participate in a syntactic relationship (Church, 1988; Ramshaw and Marcus, 1995; Argamon et al., 1998; Cardie and Pierce, 1998; Munoz et al., 1999; Punyakanok and Roth, 2001; Buchholz et al., 1999; Tjong Kim Sang and Buchholz, 2000). While earlier work in this direction concentrated on manual construction of rules, most of the recent work has been motivated by the observation that shallow syntactic information can be extracted using local information by examining the pattern itself, its nearby context and the local part-of-speech information. to ] [NP only $ 1.8 billion ] [PP in ] [NP September] .
Citation Sentence:
Thus , over the past few years , along with advances in the use of learning and statistical methods for acquisition of full parsers ( Collins , 1997 ; Charniak , 1997a ; Charniak , 1997b ; Ratnaparkhi , 1997 ) , significant progress has been made on the use of statistical learning methods to recognize shallow parsing patterns syntactic phrases or words that participate in a syntactic relationship ( Church , 1988 ; Ramshaw and Marcus , 1995 ; Argamon et al. , 1998 ; Cardie and Pierce , 1998 ; Munoz et al. , 1999 ; Punyakanok and Roth , 2001 ; Buchholz et al. , 1999 ; Tjong Kim Sang and Buchholz , 2000 ) .
Context after the citation:
Research on shallow parsing was inspired by psycholinguistics arguments (Gee and Grosjean, 1983) that suggest that in many scenarios (e.g., conversational) full parsing is not a realistic strategy for sentence processing and analysis, and was further motivated by several arguments from a natural language engineering viewpoint. First, it has been noted that in many natural language applications it is sufficient to use shallow parsing information; information such as noun phrases (NPs) and other syntactic sequences have been found useful in many large-scale language processing applications including information extraction and text summarization (Grishman, 1995; Appelt et al., 1993). Second, while training a full parser requires a collection of fully parsed sentences as training corpus, it is possible to train a shallow parser incrementally. If all that is available is a collection of sentences annotated for NPs, it can be used to produce this level of analysis. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:55
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
In our previous work (Salloum and Habash, 2011; Salloum and Habash, 2012), we applied our approach to tokenized Arabic and our DA-MSA transfer component used feature transfer rules only. In contrast, we use hand-written morphosyntactic transfer rules that focus on translating DA morphemes and lemmas to their MSA equivalents. Similarly, we use some character normalization rules, a DA morphological analyzer, and DA-MSA dictionaries.
Citation Sentence:
In our previous work ( Salloum and Habash , 2011 ; Salloum and Habash , 2012 ) , we applied our approach to tokenized Arabic and our DA-MSA transfer component used feature transfer rules only .
Context after the citation:
We did not use a language model to pick the best path; instead we kept the ambiguity in the lattice and passed it to our SMT system. In contrast, in this paper, we run ELISSA on untokenized Arabic, we use feature, lemma, and surface form transfer rules, and we pick the best path of the generated MSA lattice through a language model. Certain aspects of our approach are similar to Riesa and Yarowsky (2006)'s, in that we use morphological analysis for DA to help DA-English MT; but unlike them, we use a rule-based approach to model DA morphology. | CompareOrContrast | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:550
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
It has already been used to implement a framework for teaching NLP (Loper and Bird, 2002). Python is very easy to learn, read and write, and allows commands to be entered interactively into the interpreter, making it ideal for experimentation. Python has a number of advantages over other options, such as Java and Perl.
Citation Sentence:
It has already been used to implement a framework for teaching NLP ( Loper and Bird , 2002 ) .
Context after the citation:
Using the Boost.Python C++ library (Abrahams, 2003), it is possible to reflect most of the components directly into Python with a minimal amount of coding. The Boost.Python library also allows the C++ code to access new classes written in Python that are derived from the C++ classes. This means that new and extended components can be written in Python (although they will be considerably slower). The Python interface allows the components to be dynamically composed, configured and extended in any operating system environment without the need for a compiler. | Extends | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:551
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Due to their remarkable ability to incorporate context structure information and long distance reordering into the translation process, tree-based translation models have shown promising progress in improving translation quality (Liu et al., 2006, 2009; Quirk et al., 2005; Galley et al., 2004, 2006; Marcu et al., 2006; Shen et al., 2008; Zhang et al., 2011b). In recent years, tree-based translation models1 are drawing more and more attention in the community of statistical machine translation (SMT).
Citation Sentence:
Due to their remarkable ability to incorporate context structure information and long distance reordering into the translation process , tree-based translation models have shown promising progress in improving translation quality ( Liu et al. , 2006 , 2009 ; Quirk et al. , 2005 ; Galley et al. , 2004 , 2006 ; Marcu et al. , 2006 ; Shen et al. , 2008 ; Zhang et al. , 2011b ) .
Context after the citation:
However, tree-based translation models always suffer from two major challenges: 1) They are usually built directly from parse trees, which are generated by supervised linguistic parsers. 1 A tree-based translation model is defined as a model using tree structures on one side or both sides. However, for many language pairs, it is difficult to acquire such corresponding linguistic parsers due to the lack of Tree-bank resources for training. 2) Parse trees are actually only used to model and explain the monolingual structure, rather than the bilingual mapping between language pairs. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:552
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Liu et al. (2005), Meral et al. (2007), Murphy (2001), Murphy and Vogel (2007) and Topkara et al. (2006a) all belong to the syntactic transformation category. In other words, instead of performing lexical substitution directly to the text, the secret message is embedded into syntactic parse trees of the sentences. Later, Atallah et al. (2001b) embedded information in the tree structure of the text by adjusting the structural properties of intermediate representations of sentences.
Citation Sentence:
Liu et al. ( 2005 ) , Meral et al. ( 2007 ) , Murphy ( 2001 ) , Murphy and Vogel ( 2007 ) and Topkara et al. ( 2006a ) all belong to the syntactic transformation category .
Context after the citation:
After embedding the secret message, modified deep structure forms are converted into the surface structure format via language generation tools. Atallah et al. (2001b) and Topkara et al. (2006a) attained the embedding capacity of 0.5 bits per sentence with the syntactic transformation method. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:553
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
introduction
Context before the citation:
Similarly, the notion of R + M-abduction is spiritually related to the "abductive inference" of Reggia (1985), the "diagnosis from first principles" of Reiter (1987), "explainability" of Poole (1988), and the subset principle of Berwick (1986). , "domain circumscription" (cfXXX Etherington and Mercer 1987), and their kin. Since techniques developed elsewhere may prove useful, at least for comparison, it is worth mentioning at this point that the proposed metarules are distant cousins of "unique-name assumption" (Genesereth and Nilsson 1987), "domain closure assumption" (ibid.)
Citation Sentence:
Similarly , the notion of R + M-abduction is spiritually related to the `` abductive inference '' of Reggia ( 1985 ) , the `` diagnosis from first principles '' of Reiter ( 1987 ) , `` explainability '' of Poole ( 1988 ) , and the subset principle of Berwick ( 1986 ) .
Context after the citation:
But, obviously, trying to establish precise connections for the metarules or the provability and the R + M-abduction would go much beyond the scope of an argument for the correspondence of paragraphs and models. These connections are being examined elsewhere (Zadrozny forthcoming). | CompareOrContrast | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:554
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
On the other side, wikis started as collective works where each entry is not owned by a single author e.g. Wikipedia (2005). Now they try to collect themselves in so-called "blogspheres". So blogs are a literary metagenre which started as authored personal diaries or journals.
Citation Sentence:
On the other side , wikis started as collective works where each entry is not owned by a single author e.g. Wikipedia ( 2005 ) .
Context after the citation:
Now personal wiki tools are arising for brainstorming and mind mapping. See Section 4 for further aspects. | Background | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:555
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
To address this inconsistency in the correspondence between inflectional features and morphemes, and inspired by Smrž (2007), we distinguish between two types of inflectional features: formbased (a.k.a. surface, or illusory) features and functional features.6 Most available Arabic NLP tools and resources model morphology using formbased ("surface") inflectional features, and do not mark rationality; this includes the Penn Arabic Treebank (PATB) (Maamouri et al. 2004), the Buckwalter morphological analyzer (Buckwalter 2004), and tools using them such as the Morphological Analysis and Disambiguation for Arabic (MADA) toolkit (Habash and Rambow 2005; Habash, Rambow, and Roth 2012). A similar inconsistency appears in feminine nominals that are not inflected using sound gender suffixes, for example, the feminine form of the masculine singular adjective Âzraq+φ ('blue') is zarqA'+φ not *Âzraq+ah. This irregular inflection, known as broken plural, is similar to the English mouse/mice, but is much more common in Arabic (over 50% of plurals in our training data).
Citation Sentence:
To address this inconsistency in the correspondence between inflectional features and morphemes , and inspired by Smrž ( 2007 ) , we distinguish between two types of inflectional features : formbased ( a.k.a. surface , or illusory ) features and functional features .6 Most available Arabic NLP tools and resources model morphology using formbased ( `` surface '' ) inflectional features , and do not mark rationality ; this includes the Penn Arabic Treebank ( PATB ) ( Maamouri et al. 2004 ) , the Buckwalter morphological analyzer ( Buckwalter 2004 ) , and tools using them such as the Morphological Analysis and Disambiguation for Arabic ( MADA ) toolkit ( Habash and Rambow 2005 ; Habash , Rambow , and Roth 2012 ) .
Context after the citation:
The Elixir-FM analyzer (Smrž 2007) readily provides the 4 PATB-tokenized words; see Section 2.5. 5 We ignore duals, which are regular in Arabic, and case/state variations in this discussion for simplicity. 6 Note that the functional and form-based feature values for verbs always coincide. | CompareOrContrast | {"domains": ["artificial_intelligence"], "input_context": "multiple_paragraphs", "output_context": "label", "source_type": "single_source", "task_family": "classification"} | acl_arc_intent_classification:train:556
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
None
Context before the citation:
Jiang et al. (2005) has built a semantic role classifier exploiting the interdependence of semantic roles.
Citation Sentence:
Jiang et al. ( 2005 ) has built a semantic role classifier exploiting the interdependence of semantic roles .
Context after the citation:
It has turned the single point classification problem into the sequence labeling problem with the introduction of semantic context features. Semantic context features indicates the features extracted from the arguments around the current one. We can use window size to represent the scope of the context. Window size [-m, n] means that, in the sequence that all the arguments has constructed, the features of previous m and following n arguments will be utilized for the classification of current semantic role. | Uses | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:557 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
related work
Context before the citation:
Others have applied the NLP technologies of near-duplicate detection and topic-based text categorization to politically oriented text (Yang and Callan, 2005; Purpura and Hillard, 2006). An exception is Grefenstette et al. (2004), who experimented with determining the political orientation of websites essentially by classifying the concatenation of all the documents found on that site. There has also been work focused upon determining the political leaning (e.g., "liberal" vs. "conservative") of a document or author, where most previously-proposed methods make no direct use of relationships between the documents to be classified (the "unlabeled" texts) (Laver et al., 2003; Efron, 2004; Mullen and Malouf, 2006).
Citation Sentence:
Others have applied the NLP technologies of near-duplicate detection and topic-based text categorization to politically oriented text ( Yang and Callan , 2005 ; Purpura and Hillard , 2006 ) .
Context after the citation:
Detecting agreement We used a simple method to learn to identify cross-speaker references indicating agreement. More sophisticated approaches have been proposed (Hillard et al., 2003), including an extension that, in an interesting reversal of our problem, makes use of sentiment-polarity indicators within speech segments (Galley et al., 2004). Also relevant is work on the general problems of dialog-act tagging (Stolcke et al., 2000), citation analysis (Lehnert et al., 1990), and computational rhetorical analysis (Marcu, 2000; Teufel and Moens, 2002). We currently do not have an efficient means to encode disagreement information as hard constraints; we plan to investigate incorporating such information in future work. | Background | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:558 |
You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:
- Background: provides context or foundational information related to the topic.
- Extends: builds upon the cited work.
- Uses: applies the methods or findings of the cited work.
- Motivation: cites the work as inspiration or rationale for the research.
- CompareOrContrast: compares or contrasts the cited work with others.
- FutureWork: cites the work as a direction for future research.
Your answer should be a single word from the following list of options: ["Background", "Extends", "Uses", "Motivation", "CompareOrContrast", "FutureWork"]. Do not include any other text in your response.
Section Title:
experiments
Context before the citation:
8 It is based on the dataset of Pang and Lee (2004),9 which consists of 1000 positive and 1000 negative movie reviews, tokenized and divided into 10 folds (F0–F9). In Zaidan et al. (2007), we introduced the "Movie Review Polarity Dataset Enriched with Annotator Rationales."
Citation Sentence:
8 It is based on the dataset of Pang and Lee ( 2004 ) ,9 which consists of 1000 positive and 1000 negative movie reviews , tokenized and divided into 10 folds ( F0 -- F9 ) .
Context after the citation:
All our experiments use F9 as their final blind test set. The enriched dataset adds rationale annotations produced by an annotator A0, who annotated folds F0–F8 of the movie review set with rationales (in the form of textual substrings) that supported the gold-standard classifications. We will use A0's data to determine the improvement of our method over a (log-linear) baseline model without rationales. We also use A0 to compare against the "masking SVM" method and SVM baseline of Zaidan et al. (2007). | Extends | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "label",
"source_type": "single_source",
"task_family": "classification"
} | acl_arc_intent_classification:train:559 |