sentence1 | sentence2 |
---|---|
for all methods , the tweets were tokenized with the cmu twitter nlp tool . | the tweets were tokenized and part-of-speech tagged with the cmu ark twitter nlp tool and stanford corenlp . |
it was shown by nederhof et al that prefix probabilities can also be effectively computed for probabilistic tree adjoining grammars . | nederhof et al , for instance , show that prefix probabilities , and therefore surprisal , can be estimated from tree adjoining grammars . |
first , kikuchi et al proposed a new long short-term memory network to control the length of the sentence generated by an encoder-decoder model in a text summarization task . | first , kikuchi et al tried to control the length of the sentence generated by an encoder-decoder model in a text summarization task . |
with word confusion networks further improves performance . | the complexity is dominated by the word confusion network construction and parsing . |
fofe can model the word order in a sequence based on a simple ordinally-forgetting mechanism , which uses the position of each word in the sequence . | fofe can model the word order in a sequence using a simple ordinally-forgetting mechanism according to the positions of words . |
we ’ ve demonstrated that the benefits of unsupervised multilingual learning increase steadily with the number of available languages . | we found that performance improves steadily as the number of available languages increases . |
dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links ( dependencies ) between words . | dependency parsing is a way of structurally analyzing a sentence from the viewpoint of modification . |
for each task , we provide separate training , development , and test datasets for english , arabic , and spanish tweets . | for each task , we provided training , development , and test datasets for english , arabic , and spanish tweets . |
a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit . | a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data . |
c ≈ { ( subj , 0 ) , ( n , 0 ) , ( v , 0 ) , ( comp , 0 ) , ( bar , 0 ) , ( agr , 1 ) , ( slash , 1 ) } and a type 1 feature successor to the feature system and ... | we add a type 0 feature 0e ( with p ( 0e ) = { 0 } ) and a type 1 feature successor to the feature system c ≈ { ( subj , 0 ) , ( n , 0 ) , ( v , 0 ) , ( comp , 0 ) , ( bar , 0 ) , ( agr , 1 ) , ( slash , 1 ) } , and use this to build the set of indices . |
shared task is a new approach to time normalization based on the semantically compositional annotation of time expressions . | the parsing time normalization task is the first effort to extend time normalization to richer and more complex time expressions . |
we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus . | we use the word2vec cbow model with a window size of 5 and a minimum frequency of 5 to generate 200-dimensional vectors . |
syntactic language models can become intolerably slow to train . | in contrast , syntactic language models can be much slower to train due to rich features . |
the learning rule was adam with default tensorflow parameters . | the learning rule was adam with standard parameters . |
we embed all words and characters into low-dimensional real-value vectors which can be learned by language model . | we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus . |
semantic knowledge ( e . g . word-senses ) has been defined at the ibm scientific center . | semantic knowledge is represented in a very detailed form ( word_sense pragmatics ) . |
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . | we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . |
to obtain this , we used mcut proposed by ding et al which is a type of spectral clustering . | to obtain this , we perform min-max cut proposed by ding et al , which is a spectral clustering method . |
part-of-speech tagging is a crucial preliminary process in many natural language processing applications . | part-of-speech tagging is a key process for various tasks such as information extraction , text-to-speech synthesis , word sense disambiguation and machine translation . |
information extraction ( ie ) is a main nlp aspects for analyzing scientific papers , which includes named entity recognition ( ner ) and relation extraction ( re ) . | information extraction ( ie ) is the process of finding relevant entities and their relationships within textual documents . |
all systems are evaluated using case-insensitive bleu . | we adopted the case-insensitive bleu-4 as the evaluation metric . |
automatic image captioning is a fundamental task that couples visual and linguistic learning . | automatic image captioning is a much studied topic in both the natural language processing ( nlp ) and computer vision ( cv ) areas of research . |
in particular , the recent shared tasks of conll 2008 tackled joint parsing of syntactic and semantic dependencies . | the recent conll shared tasks have been focusing on semantic dependency parsing along with the traditional syntactic dependency parsing . |
conditional random fields are undirected graphical models trained to maximize the conditional probability of the desired outputs given the corresponding inputs . | conditional random fields are discriminatively-trained undirected graphical models that find the globally optimal labeling for a given configuration of random variables . |
additionally , a back-off 2-gram model with goodturing discounting and no lexical classes was built from the same training data , using the srilm toolkit . | a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data . |
johnson and charniak proposed a tag-based noisy channel model , which showed great improvement over a boosting-based classifier . | johnson and charniak ( 2004 ) proposed a tag-based noisy channel model for disfluency detection . |
this is also in line with what has been previously observed in that a person may express the same stance towards a target by using negative or positive language . | as previously reported in , a person may express the same stance towards a target by using negative or positive language . |
relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence . | relation extraction is a fundamental task in information extraction . |
semantic difference is a ternary relation between two concepts ( apple , banana ) and a discriminative attribute ( red ) that characterizes the first concept but not the other . | semantic difference is a ternary relation between two concepts ( apple , banana ) and a discriminative feature ( red ) that characterizes the first concept but not the other . |
ding and palmer propose a syntax-based translation model based on a probabilistic synchronous dependency insert grammar , a version of synchronous grammars defined on dependency trees . | ding and palmer introduce the notion of a synchronous dependency insertion grammar as a tree substitution grammar defined on dependency trees . |
sentiment classification is a very domain-specific problem ; training a classifier using the data from one domain may fail when testing against data from another . | sentiment classification is a task to predict a sentiment label , such as positive/negative , for a given text and has been applied to many domains such as movie/product reviews , customer surveys , news comments , and social media . |
bansal et al show the benefits of such modified-context embeddings in dependency parsing task . | bansal et al show that deps context is preferable to linear context on parsing task . |
they have been useful as features in many nlp tasks . | others have found them useful in parsing and other tasks . |
for example , faruqui and dyer use canonical component analysis to align the two embedding spaces . | more concretely , faruqui and dyer use canonical correlation analysis to project the word embeddings in both languages to a shared vector space . |
the log-lineal combination weights were optimized using mert . | the minimum error rate training was used to tune the feature weights . |
we train a secondorder crf model using marmot , an efficient higher-order crf implementation . | we model the sequence of morphological tags using marmot , a pruned higher-order crf . |
word alignment is the process of identifying wordto-word links between parallel sentences . | word alignment is a fundamental problem in statistical machine translation . |
sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) . | sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text . |
we use pre-trained glove embeddings to represent the words . | we use pre-trained vectors from glove for word-level embeddings . |
so in most cases of irony , such features will be useful for detection . | given much of the irony in tweets is sarcasm , looking at some of these features may be useful . |
our model considers a word type and its allowed pos tags as a primary element of the model . | in this work , we take a more direct approach and treat a word type and its allowed pos tags as a primary element of the model . |
we use wordsim-353 , which contains 353 english word pairs with human similarity ratings . | specifically , we used wordsim353 , a benchmark dataset , consisting of relatedness judgments for 353 word pairs . |
mccarthy instead compares two semantic profiles in wordnet that contain the concepts corresponding to the nouns from the two argument positions . | in contrast to comparing head nouns directly , mccarthy instead compares the selectional preferences for each of the two slots . |
the 50-dimensional pre-trained word embeddings are provided by glove , which are fixed during our model training . | we use the glove pre-trained word embeddings for the vectors of the content words . |
mann and yarowsky use semantic information that is extracted from documents to inform a hierarchical agglomerative clustering algorithm . | mann and yarowsky used semantic information extracted from documents referring to the target person in an hierarchical agglomerative clustering algorithm . |
twitter is a very popular micro blogging site . | twitter is a well-known social network service that allows users to post a short 140-character status update which is called a “ tweet ” . |
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm . | the parameter weights are optimized with minimum error rate training . |
in this paper , we propose a forest-based tree sequence to string model , which is designed to integrate the strengths of the forest-based and the tree sequence-based models . | to integrate their strengths , in this paper , we propose a forest-based tree sequence to string translation model . |
transliteration is a subtask in ne translation , which translates nes based on the phonetic similarity . | transliteration is often defined as phonetic translation ( cite-p-21-3-2 ) . |
in this paper , we discuss methods for automatically creating models of dialog structure . | in future work , we will assess the performance of dialog structure prediction on recognized speech . |
as a statistical significance test , we used bootstrap resampling . | we used bleu as our evaluation criteria and the bootstrapping method for significance testing . |
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . | we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding . |
the words in the document , question and answer are represented using pre-trained word embeddings . | the word embeddings are identified using the standard glove representations . |
relation extraction is the task of detecting and classifying relationships between two entities from text . | relation extraction is a fundamental task in information extraction . |
kobayashi et al identified opinion relations by searching for useful syntactic contextual clues . | kobayashi et al adopted a supervised learning technique to search for useful syntactic patterns as contextual clues . |
neural models have shown great success on a variety of tasks , including machine translation , image caption generation , and language modeling . | various models for learning word embeddings have been proposed , including neural net language models and spectral models . |
morphological disambiguation is the process of assigning one set of morphological features to each individual word in a text . | morphological disambiguation is the task of selecting the correct morphological parse for a given word in a given context . |
case-insensitive bleu4 was used as the evaluation metric . | all systems are evaluated using case-insensitive bleu . |
evaluation results show that our model clearly outperforms a number of baseline models in terms of clustering posts . | the results show that our model can clearly outperform the baselines in terms of three evaluation metrics . |
modified kneser-ney trigram models are trained using srilm upon the chinese portion of the training data . | n-gram language models are trained over the target-side of the training data , using srilm with modified kneser-ney discounting . |
there are techniques for analyzing agreement when annotations involve segment boundaries , but our focus in this article is on words . | there are techniques for analyzing agreement when annotations involve segment boundaries , but our focus in this paper is on words . |
for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus . | further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus . |
to reduce overfitting , we apply the dropout method to regularize our model . | to mitigate overfitting , we apply the dropout method to the inputs and outputs of the network . |
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm . | for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus . |
twitter is the medium where people post real time messages to discuss on the different topics , and express their sentiments . | twitter is a rich resource for information about everyday events – people post their tweets to twitter publicly in real-time as they conduct their activities throughout the day , resulting in a significant amount of mundane information about common events . |
the neural embeddings were created using the word2vec software 3 accompanying . | those models were trained using word2vec skip-gram and cbow . |
in this paper , we investigate unsupervised learning of field segmentation models . | in this work , we have examined the task of learning field segmentation models using unsupervised learning . |
neural machine translation is currently the state-of-the art paradigm for machine translation . | neural machine translation has recently become the dominant approach to machine translation . |
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing . | these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit . |
the constituent context model for inducing constituency parses was the first unsupervised approach to surpass a right-branching baseline . | the constituent-context model is the first unsupervised constituency grammar induction system that achieves better performance than the trivial right branching baseline for english . |
in this paper , we propose a procedure to train multi-domain , recurrent neural network-based ( rnn ) language generators via multiple adaptation . | the paper presents an incremental recipe for training multi-domain language generators based on a purely data-driven , rnn-based generation model . |
we use pre-trained word2vec word vectors and vector representations by tilk et al to obtain word-level similarity information . | we also used word2vec to generate dense word vectors for all word types in our learning corpus . |
the stochastic gradient descent with back-propagation is performed using adadelta update rule . | training is done through stochastic gradient descent over shuffled mini-batches with adadelta update rule . |
in the n-coalescent , every pair of lineages merges independently with rate 1 , with parents chosen uniformly at random from the set of possible parents . | in the n-coalescent , every pair of lineages merges independently with rate 1 , with parents chosen uniformly at random from the set of possible parents at the previous time step . |
a critical difference in our approach is to allow highly flexible reordering operations , in combination with a discriminative model that can condition on rich features of the source-language input . | a critical difference in our work is to allow arbitrary reorderings of the source language sentence ( as in phrase-based systems ) , through the use of flexible parsing operations . |
we measure the translation quality using a single reference bleu . | we evaluated the translation quality of the system using the bleu metric . |
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing . | a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit . |
sentence compression is a standard nlp task where the goal is to generate a shorter paraphrase of a sentence . | sentence compression is the task of compressing long , verbose sentences into short , concise ones . |
wu presents a better-constrained grammar designed to only produce tail-recursive parses . | wu proposes a bilingual segmentation grammar extending the terminal rules by including phrase pairs . |
although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors . | coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world . |
we trained a tri-gram hindi word language model with the srilm tool . | we used the srilm toolkit to generate the scores with no smoothing . |
sentiment analysis is a research area that does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 ) . | sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text . |
we adopt two standard metrics rouge and bleu for evaluation . | for the evaluation of the results we use the bleu score . |
in this paper , we focus on designing a review generation model that is able to leverage both user and item information . | in this paper , we focus on the problem of building assistive systems that can help users to write reviews . |
the syntactic feature set is extracted after dependency parsing using the maltparser . | all data is automatically annotated with syntactic tags using maltparser . |
we used a bitext projection technique to transfer dependency-based opinion frames . | we propose a cross-lingual framework for fine-grained opinion mining using bitext projection . |
knowledge of our native language provides an initial foundation for second language learning . | our native language ( l1 ) plays an essential role in the process of lexical choice . |
semantic roles are approximated by propbank argument roles . | direction , manner , and purpose are propbank adjunctive argument labels . |
in this paper , we study the problem of sentiment analysis on product reviews . | in this paper , we propose a novel and effective approach to sentiment analysis on product reviews . |
in this paper we present an algorithmic framework which allows an automated acquisition of map-like information from the web , based on surface patterns . | in this paper we utilize a pattern-based lexical acquisition framework for the discovery of geographical information . |
circles denote events , squares denote arguments , solid arrows represent event-event relations , and dashed arrows represent event-argument relations . | the circles denote fixations , and the lines are saccades . |
semantic parsing is the task of mapping natural language sentences to a formal representation of meaning . | semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation . |
each context consists of approximately a paragraph of surrounding text , where the word to be discriminated ( the target word ) is found approximately in the middle of the context . | a context consists of all the patterns of n-grams within a certain window around the corresponding entity mention . |
we used the pre-trained google embedding to initialize the word embedding matrix . | in this baseline , we applied the word embedding trained by skipgram on wiki2014 . |
the word embeddings used in each neural network is initialized with the pre-trained glove with the dimension of 300 . | the word embeddings are initialized using the pre-trained glove , and the embedding size is 300 . |
in spite of this broad attention , the open ie task definition has been lacking . | in spite of this wide attention , open ie ’ s formal definition is lacking . |
neural network models have been exploited to learn dense feature representation for a variety of nlp tasks . | interestingly convolutional neural networks , widely used for image processing , have recently emerged as a strong class of models for nlp tasks . |
reordering is a difficult task in translating between widely different languages such as japanese and english . | reordering is a common problem observed in language pairs of distant language origins . |
we initialize our word vectors with 300-dimensional word2vec word embeddings . | our cdsm feature is based on word vectors derived using a skip-gram model . |
we define a conditional random field for this task . | our model is a first order linear chain conditional random field . |
A reformatted version of the ParaSCI dataset from *ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase Generation*. Data retrieved from dqxiu/ParaSCI. Each row pairs a sentence (`sentence1`) with a paraphrase of it (`sentence2`), both drawn from scientific papers.
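The pairs can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming this copy is hosted under a repo id (the path below is a placeholder) and exposes the `sentence1`/`sentence2` columns shown in the preview above:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual path of this reformatted copy.
ds = load_dataset("user/parasci-reformatted", split="train")

# Each row is a paraphrase pair, as in the preview above.
for row in ds.select(range(3)):
    print(row["sentence1"], "|", row["sentence2"])
```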