id (stringlengths 8-19) | text (stringlengths 2.27k-16.3k) | summary (stringlengths 356-582) |
---|---|---|
D18-1065 | You are an expert at summarizing long articles. Proceed to summarize the following text:
In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy , accurate , and efficient attention mechanism for sequence to sequence learning . The method combines the advantage of sharp focus in hard attention and the implementation ease of soft attention . On five translation and two morphological inflection tasks we show effortless and consistent gains in BLEU compared to existing attention mechanisms . In structured input-output models as used in tasks like translation and image captioning , the attention variable decides which part of the input aligns to the current output . Many attention mechanisms have been proposed ( Xu et al . , 2015 ; Bahdanau et al . , 2014 ; Luong et al . , 2015 ; Martins and Astudillo , 2016 ) but the de facto standard is a soft attention mechanism that first assigns attention weights to input encoder states , then computes an attention weighted ' soft ' aligned input state , which finally derives the output distribution . This method is end to end differentiable and easy to implement . Another less popular variant is hard attention that aligns each output to exactly one input state but requires intricate training to teach the network to choose that state . When successfully trained , hard attention is often found to be more accurate ( Xu et al . , 2015 ; Zaremba and Sutskever , 2015 ) . In NLP , a recent success has been in a monotonic hard attention setting in morphological inflection tasks ( Yu et al . , 2016 ; Aharoni and Goldberg , 2017 ) . For general seq2seq learning , methods like Sparse-Max ( Martins and Astudillo , 2016 ) and local attention ( Luong et al . , 2015 ) were proposed to bridge the gap between soft and hard attention . * Both authors contributed equally to this work In this paper we propose a surprisingly simpler alternative based on the original joint distribution between output and attention , of which existing soft and hard attention mechanisms are approximations . The joint model couples input states individually to the output like in hard attention , but it combines the advantage of end-to-end trainability of soft attention . When the number of input states is large , we propose to use a simple approximation of the full joint distribution called Beam-joint . This approximation is also easily trainable and does not suffer from the high variance of Monte-Carlo sampling gradients of hard attention . We evaluated our model on five translation tasks and increased BLEU by 0.8 to 1.7 over soft attention , which in turn was better than hard and the recent Sparsemax ( Martins and Astudillo , 2016 ) attention . More importantly , the training process was as easy as soft attention . For further support , we also evaluate on two morphological inflection tasks and got gains over soft and hard attention . In this paper we showed a simple yet effective approximation of the joint attention-output distribution in sequence to sequence learning . Our joint model consistently provides higher accuracy without significant running time overheads in five translation and two morphological inflection tasks . An interesting direction for future work is to extend beam-joint to multi-head attention architectures as in ( Vaswani et al . , 2017 ; Xu Chen , 2018 ) . | Softmax attention models are popular because of their differentiable and easy to implement nature while hard attention models outperform them when successfully trained.
They propose a method to approximate the joint attention-output distribution which provides attention as sharp as hard attention and is as easy to implement as soft attention.
The proposed approach outperforms soft attention, hard attention, and the recent Sparsemax attention on five translation tasks and also on two morphological inflection tasks. |
2020.acl-main.282 | You are an expert at summarizing long articles. Proceed to summarize the following text:
The International Classification of Diseases ( ICD ) provides a standardized way for classifying diseases , which endows each disease with a unique code . ICD coding aims to assign proper ICD codes to a medical record . Since manual coding is very laborious and prone to errors , many methods have been proposed for the automatic ICD coding task . However , most of existing methods independently predict each code , ignoring two important characteristics : Code Hierarchy and Code Co-occurrence . In this paper , we propose a Hyperbolic and Co-graph Representation method ( HyperCore ) to address the above problem . Specifically , we propose a hyperbolic representation method to leverage the code hierarchy . Moreover , we propose a graph convolutional network to utilize the code co-occurrence . Experimental results on two widely used datasets demonstrate that our proposed model outperforms previous state-ofthe-art methods . The International Classification of Diseases ( ICD ) is a healthcare classification system supported by the World Health Organization , which provides a unique code for each disease , symptom , sign and so on . ICD codes have been widely used for analyzing clinical data and monitoring health issues ( Choi et al . , 2016 ; Avati et al . , 2018 ) . Due to the importance of ICD codes , ICD coding -which assigns proper ICD codes to a medical record -has drawn much attention . The task of ICD coding is usually undertaken by professional coders according to doctors ' diagnosis descriptions in the form of free texts . However , manual coding is very expensive , time-consuming and error-prone . The cost incurred by coding errors and the financial investment spent on improving coding quality are estimated to be $ 25 billion per year in the US ( Lang , 2007 ) . Two main reasons can account for this . First , only the people who have medical expert knowledge and specialized ICD coding skills can handle the task . However , it is hard to train such an eligible ICD coder . Second , it is difficult to correctly assign proper codes to the input document even for professional coders , because one document can be assigned multiple ICD codes and the number of codes in the taxonomy of ICD is large . For example , there are over 15,000 and 60,000 codes respectively in the ninth version ( ICD-9 ) and the tenth version ( ICD-10 ) of ICD taxonomies . To reduce human labor and coding errors , many methods have been carefully designed for automatic ICD coding ( Perotte et al . , 2013 ; Mullenbach et al . , 2018 ) . For example in Figure 1 , given the clinical text of a patient , the ICD coding model needs to automatically predict the corresponding ICD codes . The automatic ICD coding task can be modeled as a multi-label classification task since each clinical text is usually accompanied by mul- tiple codes . Most of the previous methods handle each code in isolation and convert the multi-label problem into a set of binary classification problems to predict whether each code of interest presents or not ( Mullenbach et al . , 2018 ; Rios and Kavuluru , 2018 ) . Though effective , they ignore two important characteristics : Code Hierarchy and Code Co-occurrence , which can be leveraged to improve coding accuracy . In the following , we will introduce the two characteristics and the reasons why they are critical for the automatic ICD coding . 
Code Hierarchy : Based on ICD taxonomy , ICD codes are organized under a tree-like hierarchical structure as shown in Figure 2 , which indicates the parent-child and sibling relations between codes . In the hierarchical structure , the upper level nodes represent more generic disease categories and the lower level nodes represent more specific diseases . The code hierarchy can capture the mutual exclusion of some codes . If code X and Y are both children of Z ( i.e. , X and Y are the siblings ) , it is unlikely to simultaneously assign X and Y to a patient in general ( Xie and Xing , 2018 ) . For example in Figure 2 , if code " 464.00 ( acute laryngitis without mention of obstruction ) " is assigned to a patient , it is unlikely to assign the code " 464.01 ( acute laryngitis with obstruction ) " to the patient at the same time . If automatic ICD coding models ignore such a characteristic , they are prone to giving inconsistent predictions . Thus , a challenging problem is how to model the code hierarchy and use it to capture the mutual exclusion of codes . Code Co-occurrence : Since some diseases are concurrent or have a causal relationship with each other , their codes usually co-occur in the clinical text , such as " 997.91 ( hypertension ) " and " 429.9 ( heart disease ) " . In this paper , we call such characteristic code co-occurrence which can capture the correlations of codes . The code co-occurrence can be utilized to correctly predict some codes which are difficult to predict by only using the clinical text itself . For example in Figure 1 , the code of " acute respiratory failure " can be easily inferred via capturing apparent clues ( i.e. , the green bold words ) from the text . Although there are also a few clues to infer the code of " acidosis " , they are very obscure , let alone predict the code of " acidosis " by only using these obscure clues . Fortunately , there is a strong association between these two diseases : one of the main causes of " acidosis " is " acute respiratory failure " . This prior knowledge can be captured via the fact that the codes of the two diseases usually co-occur in clinical texts . By considering the correlation , the automatic ICD coding model can better exploit obscure clues to predict the code of " acidosis " . Therefore , another problem is how to leverage code co-occurrence for ICD coding . In this paper , we propose a novel method termed as Hyperbolic and Co-graph Representation method ( HyperCore ) to address above problems . Since the tree-likeness properties of the hyperbolic space make it more suitable for representing symbolic data with hierarchical structures than the Euclidean space ( Nickel and Kiela , 2017 ) , we propose a hyperbolic representation learning method to learn the Code Hierarchy . Meanwhile , the graph has been proved effective in modeling data correlation and the graph convolutional network ( GCN ) enables to efficiently learn node representation ( Kipf and Welling , 2016 ) . Thus , we devise a code co-occurrence graph ( co-graph ) for capturing Code Co-occurrence and exploit the GCN to learn the code representation in the co-graph . The contributions of this paper are threefold . Firstly , to our best knowledge , this is the first work to propose a hyperbolic representation method to leverage the code hierarchy for automatic ICD coding . Secondly , this is also the first work to utilize a GCN to exploit code co-occurrence correlation for automatic ICD coding . 
Thirdly , experiments on two widely used automatic ICD coding datasets show that our proposed model outperforms previous state-of-the-art methods . In this paper , we propose a novel hyperbolic and cograph representation framework for the automatic ICD coding task , which can jointly exploit code hierarchy and code co-occurrence . We exploit the hyperbolic representation learning method to leverage the code hierarchy in the hyperbolic space . Moreover , we use the graph convolutional network to capture the co-occurrence correlation . Experimental results on two widely used datasets indicate that our proposed model outperforms previous state-ofthe-art methods . We believe our method can also be applied to other tasks that need to exploit hierarchical label structure and label co-occurrence , such as fine-grained entity typing and hierarchical multi-label classification . | Existing models that classify texts in medical records into the International Classification of Diseases reduce manual efforts however they ignore Code Hierarchy and Code Co-occurrence.
They propose a hyperbolic representation method to leverage the code hierarchy and a graph convolutional network to utilize the code co-occurrence for automatic ICD coding.
The proposed model outperforms state-of-the-art methods on two widely used datasets. |
P03-1015 | You are an expert at summarizing long articles. Proceed to summarize the following text:
The paper describes two parsing schemes : a shallow approach based on machine learning and a cascaded finite-state parser with a hand-crafted grammar . It discusses several ways to combine them and presents evaluation results for the two individual approaches and their combination . An underspecification scheme for the output of the finite-state parser is introduced and shown to improve performance . In several areas of Natural Language Processing , a combination of different approaches has been found to give the best results . It is especially rewarding to combine deep and shallow systems , where the former guarantees interpretability and high precision and the latter provides robustness and high recall . This paper investigates such a combination consisting of an n-gram based shallow parser and a cascaded finite-state parser 1 with hand-crafted grammar and morphological checking . The respective strengths and weaknesses of these approaches are brought to light in an in-depth evaluation on a treebank of German newspaper texts ( Skut et al . , 1997 ) containing ca . 340,000 tokens in 19,546 sentences . The evaluation format chosen ( dependency tuples ) is used as the common denominator of the systems 1 Although not everyone would agree that finite-state parsers constitute a ' deep ' approach to parsing , they still are knowledge-based , require efforts of grammar-writing , a complex linguistic lexicon , manage without training data , etc . in building a hybrid parser with improved performance . An underspecification scheme allows the finite-state parser partially ambiguous output . It is shown that the other parser can in most cases successfully disambiguate such information . Section 2 discusses the evaluation format adopted ( dependency structures ) , its advantages , but also some of its controversial points . Section 3 formulates a classification problem on the basis of the evaluation format and applies a machine learner to it . Section 4 describes the architecture of the cascaded finite-state parser and its output in a novel underspecification format . Section 5 explores several combination strategies and tests them on several variants of the two base components . Section 6 provides an in-depth evaluation of the component systems and the hybrid parser . Section 7 concludes . The paper has presented two approaches to German parsing ( n-gram based machine learning and cascaded finite-state parsing ) , and evaluated them on the basis of a large amount of data . A new representation format has been introduced that allows underspecification of select types of syntactic ambiguity ( attachment and subcategorization ) even in the absence of a full-fledged chart . Several methods have been discussed for combining the two approaches . It has been shown that while combination with the shallow approach can only marginally improve performance of the cascaded parser if ideal disambiguation is assumed , a quite substantial rise is registered in situations closer to the real world where POS tagging is deficient and resolution of attachment and subcategorization ambiguities less than perfect . In ongoing work , we look at integrating a statistic context-free parser called BitPar , which was written by Helmut Schmid and achieves .816 F-score on NEGRA . Interestingly , the performance goes up to .9474 F-score when BitPar is combined with the FS parser ( upper bound ) and .9443 for the lower bound . So at least for German , combining parsers seems to be a pretty good idea . 
Thanks are due to Helmut Schmid and Prof. C. Rohrer for discussions , and to the reviewers for their detailed comments . | Combining different methods often achieves the best results especially combinations of shallow and deep can realize both interpretability and good results.
They propose several ways to combine a machine learning-based shallow parser and a cascaded finite-state parser with a hand-crafted grammar.
Evaluations on a treebank of German newspaper texts show that the combination achieves substantial gains in realistic settings where POS tagging and ambiguity resolution are imperfect. |
P19-1352 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Word embedding is central to neural machine translation ( NMT ) , which has attracted intensive research interest in recent years . In NMT , the source embedding plays the role of the entrance while the target embedding acts as the terminal . These layers occupy most of the model parameters for representation learning . Furthermore , they indirectly interface via a soft-attention mechanism , which makes them comparatively isolated . In this paper , we propose shared-private bilingual word embeddings , which give a closer relationship between the source and target embeddings , and which also reduce the number of model parameters . For similar source and target words , their embeddings tend to share a part of the features and they cooperatively learn these common representation units . Experiments on 5 language pairs belonging to 6 different language families and written in 5 different alphabets demonstrate that the proposed model provides a significant performance boost over the strong baselines with dramatically fewer model parameters . With the introduction of ever more powerful architectures , neural machine translation ( NMT ) has become the most promising machine translation method ( Kalchbrenner and Blunsom , 2013 ; Sutskever et al . , 2014 ; Bahdanau et al . , 2015 ) . For word representation , different architecturesincluding , but not limited to , recurrence-based ( Chen et al . , 2018 ) , convolution-based ( Gehring et al . , 2017 ) and transformation-based ( Vaswani et al . , 2017 ) NMT models-have been taking advantage of the distributed word embeddings to capture the syntactic and semantic properties of words ( Turian et al . , 2010 ) . Figure 1 : Comparison between ( a ) standard word embeddings and ( b ) shared-private word embeddings . In ( a ) , the English word " Long " and the German word " Lange " , which have similar lexical meanings , are represented by two private d-dimension vectors . While in ( b ) , the two word embeddings are made up of two parts , indicating the shared ( lined nodes ) and the private ( unlined nodes ) features . This enables the two words to make use of common representation units , leading to a closer relationship between them . NMT usually utilizes three matrices to represent source embeddings , target input embeddings , and target output embeddings ( also known as pre-softmax weight ) , respectively . These embeddings occupy most of the model parameters , which constrains the improvements of NMT because the recent methods become increasingly memory-hungry ( Vaswani et al . , 2017 ; Chen et al . , 2018 ) . 1 Even though converting words into subword units ( Sennrich et al . , 2016b ) , nearly 55 % of model parameters are used for word representation in the Transformer model ( Vaswani et al . , 2017 ) . To overcome this difficulty , several methods are proposed to reduce the parameters used for word representation of NMT . Press and Wolf ( 2017 ) propose two weight tying ( WT ) methods , called decoder WT and three-way WT , to substantially reduce the parameters of the word embeddings . Decoder WT ties the target input embedding and target output embedding , which has become the new de facto standard of practical NMT ( Sen- Figure 2 : Shared-private bilingual word embeddings perform between the source and target words or sub-words ( a ) with similar lexical meaning , ( b ) with same word form , and ( c ) without any relationship . Different sharing mechanisms are adapted into different relationship categories . 
This strikes the right balance between capturing monolingual and bilingual characteristics . The closeness of relationship decides the portion of features to be used for sharing . Words with similar lexical meaning tend to share more features , followed by the words with the same word form , and then the unrelated words , as illustrated by the lined nodes . nrich et al . , 2017 ) . Three-way WT uses only one matrix to represent the three word embeddings , where the source and target words that have the same word form tend to share a word vector . This method can also be adapted to sub-word NMT with a shared source-target sub-word vocabulary and it performs well in language pairs with many of the same characters , such as English-German and English-French ( Vaswani et al . , 2017 ) . Unfortunately , this method is not applicable to languages that are written in different alphabets , such as Chinese-English ( Hassan et al . , 2018 ) . Another challenge facing the source and target word embeddings of NMT is the lack of interactions . This degrades the attention performance , leading to some unaligned translations that hurt the translation quality . Hence , Kuang et al . ( 2018 ) propose to bridge the source and target embeddings , which brings better attention to the related source and target words . Their method is applicable to any language pairs , providing a tight interaction between the source and target word pairs . However , their method requires additional components and model parameters . In this work , we aim to enhance the word representations and the interactions between the source and target words , while using even fewer parameters . To this end , we present a languageindependent method , which is called sharedprivate bilingual word embeddings , to share a part of the embeddings of a pair of source and target words that have some common characteristics ( i.e. similar words should have similar vectors ) . Figure 1 illustrates the difference between the standard word embeddings and shared-private word embeddings of NMT . In the proposed method , each source ( or target ) word is represented by a word embedding that consists of the shared features and the private features . The shared features can also be regarded as the prior alignments connecting the source and target words . The private features allow the words to better learn the monolingual characteristics . Meanwhile , the features shared by the source and target embeddings result in a significant reduction of the number of parameters used for word representations . The experimental results on 6 translation datasets of different scales show that our model with fewer parameters yields consistent improvements over the strong Transformer baselines . In this work , we propose a novel sharing technique to improve the learning of word embeddings for NMT . Each word embedding is composed of shared and private features . The shared features act as a prior alignment guidance for the attention model to improve the quality of attention . Meanwhile , the private features enable the words to better capture the monolingual characteristics , result in an improvement of the overall translation quality . According to the degree of relevance between a parallel word pair , the word pairs are categorized into three different groups and the number of shared features is different . Our experimental results show that the proposed method outperforms the strong Transformer baselines while using fewer model parameters . 
| Word embeddings occupy a large amount of memory, and weight tying does not mitigate this issue for distant language pairs on translation tasks.
They propose a language-independent method in which a model shares a part of the embeddings between source and target words that have some common characteristics.
Experiments on machine translation datasets involving multiple language families and scripts show that the proposed model outperforms baseline models while using fewer parameters. |
2021.emnlp-main.66 | You are an expert at summarizing long articles. Proceed to summarize the following text:
This paper proposes to study a fine-grained semantic novelty detection task , which can be illustrated with the following example . It is normal that a person walks a dog in the park , but if someone says " A man is walking a chicken in the park , " it is novel . Given a set of natural language descriptions of normal scenes , we want to identify descriptions of novel scenes . We are not aware of any existing work that solves the problem . Although existing novelty or anomaly detection algorithms are applicable , since they are usually topic-based , they perform poorly on our fine-grained semantic novelty detection task . This paper proposes an effective model ( called GAT-MA ) to solve the problem and also contributes a new dataset . Experimental evaluation shows that GAT-MA outperforms 11 baselines by large margins . Novelty or anomaly detection has been an important research topic since 1970s ( Barnett and Lewis , 1994 ) due to numerous applications ( Chalapathy et al . , 2018 ; Pang et al . , 2021 ) . Recently , it has also become important for natural language processing ( NLP ) . Many researchers have studied the problem in the text classification setting ( Fei and Liu , 2016 ; Shu et al . , 2017 ; Xu et al . , 2019 ; Lin and Xu , 2019 ; Zheng et al . , 2020 ) . However , these text novelty classifiers are mainly coarse-grained , working at the document or topic level . Given a text document , their goal is to detect whether the text belongs to a known class or unknown class . This paper introduces a new text novelty detection problem -fine-grained semantic novelty detection . Specifically , given a text description d , we detect whether d represents a semantically novel fact or not . This work considers text data that describe scenes of real-world phenomena in natural language ( NL ) . In our daily lives , we observe different real-world phenomena ( events , activities , situations , etc . ) and often describe these observations ( referred as " scenes " onwards ) in NL to others or write about them . It is quite natural to observe scenes that we have not seen before ( i.e. , novel scenes ) . For example , it is a common scene that " A person walks a dog in the park " , but if someone says " A man is walking a chicken in the park " , it is quite unexpected and novel . Detecting such semantic novelty requires complex conceptual and semantic reasoning over text and thus , is a challenging NLP problem . Note that conceptually , the judgement of the novelty of a scene is subjective and might differ from person to person . However , there are some scenes for which a majority of people have agreement about their novelty . A good example of such majority-view of novelty is the widely-spread meme pictures on social media , which contain novel interactions between objects . In this work , we restrict our research to this majority-based view of novelty and leave the personalized novelty view angle for the future work . In this work , we leverage the captions of images from popular datasets like COCO , Flickr , etc . , to build a semantic novelty detection dataset ( Sec . 3 ) , 1where we consider an image as a scene and the corresponding image captions as different NL descriptions of the scene . Detecting text describing semantically novel observations have many applications , e.g. , recommending novel news , novel images & videos ( based on their text descriptions ) , social media posts and conversations . The problem of semantic novelty detection is defined as follows . 
Problem Definition : Given a set of natural language descriptions D = { d 1 , d 2 , ..... d n } of common scenes , build a model M using D to score the semantic novelty of a test NL description d with respect to D , i.e. , classifying d into one of the two classes { NORMAL , NOVEL } . " NORMAL " means that d is a description of a common scene and " NOVEL " means d is a description of a se-mantically novel scene . As the detection model M is built only with " NORMAL " class data , the task is an one-class text classification problem . We are unaware of any existing work that can effectively solve this problem . Although existing novelty / anomaly detection and one-class classification algorithms are applicable , since they are coarse-grained or topic-based , they perform poorly on our task ( see Sec . 5 ) . Note that although we focus on solving the problem of semantic novelty detection of NL descriptions of scenes , the proposed task and solution framework are generally applicable to other applications . This paper proposes a new technique , called GAT-MA ( Graph Attention network with Max-Margin loss and knowledge-based contrastive data Augmentation ) to identify NL description sentences of novel scenes . Since our task is at the sentence level and fine-grained , we exploit Graph Attention Network ( GAT ) on the parsed dependency graph of each sentence , which fuses both semantic and syntactic information in the sentence for reasoning with the internal interactions of entities and actions . To enable the model to capture long-range interactions , we stack multiple layers of GATs to build a deep GAT model with multi-hop graph attention . We also create the pseudo novel training data based on the given normal training data through contrastive data augmentation . Thus , GAT-MA is trained with the given original normal scene descriptions and the augmented pseudo novel scene descriptions ( Sec . 4 ) . GAT-MA is evaluated using our newly created Novel Scene Description Detection ( NSD2 ) Dataset . The results show that GAT-MA outperforms a wide range of latest novelty or anomaly detection baselines by very large margins . Our main contributions are as follows : 1 . We propose a new task of semantic novelty detection in text . Whereas the existing work focuses on coarse-grained document-or topiclevel novelty , our task requires fine-grained sentence-level semantic & syntactic analysis . 2 . We propose a highly effective technique called GAT-MA to solve the proposed semantic novelty detection problem , which is based on GAT with dependency parsing and knowledgebased contrastive data augmentation . 3 . We create a new dataset called NSD2 for the proposed task . The dataset can be used as a benchmark dataset by the NLP community . Novelty detection is an important problem because anything novel is of interest . This paper proposed a semantic novelty detection problem and designed a graph attention network based approach ( called GAT-MA ) exploiting parsing and data augmentation to solve the problem . As there is no existing evaluation dataset for the proposed task , an evaluation dataset has been created . Experimental comparisons with a wide range of baselines showed that GAT-MA outperforms them by very large margins . | Existing works on novelty or abnormally detection are coarse-grained only focusing on the document or sentence level as a text classification task.
They propose a fine-grained semantic novelty detection problem where systems detect whether a textual description is a novel fact, coupled with a graph attention-based model.
The proposed model outperforms 11 baseline models by large margins on a new dataset created from image captions for the proposed task. |
P98-1081 | You are an expert at summarizing long articles. Proceed to summarize the following text:
In this paper we examine how the differences in modelling between different data driven systems performing the same NLP task can be exploited to yield a higher accuracy than the best individual system . We do this by means of an experiment involving the task of morpho-syntactic wordclass tagging . Four well-known tagger generators ( Hidden Markov Model , Memory-Based , Transformation Rules and Maximum Entropy ) are trained on the same corpus data . After comparison , their outputs are combined using several voting strategies and second stage classifiers . All combination taggers outperform their best component , with the best combination showing a 19.1 % lower error rate than the best individual tagger . In all Natural Language Processing ( NLP ) systems , we find one or more language models which are used to predict , classify and/or interpret language related observations . Traditionally , these models were categorized as either rule-based / symbolic or corpusbased / probabilistic . Recent work ( e.g. Brill 1992 ) has demonstrated clearly that this categorization is in fact a mix-up of two distinct Categorization systems : on the one hand there is the representation used for the language model ( rules , Markov model , neural net , case base , etc . ) and on the other hand the manner in which the model is constructed ( hand crafted vs. data driven ) . Data driven methods appear to be the more popular . This can be explained by the fact that , in general , hand crafting an explicit model is rather difficult , especially since what is being modelled , natural language , is not ( yet ) wellunderstood . When a data driven method is used , a model is automatically learned from the implicit structure of an annotated training corpus . This is much easier and can quickly lead to a model which produces results with a ' reasonably ' good quality . Obviously , ' reasonably good quality ' is not the ultimate goal . Unfortunately , the quality that can be reached for a given task is limited , and not merely by the potential of the learning method used . Other limiting factors are the power of the hard-and software used to implement the learning method and the availability of training material . Because of these limitations , we find that for most tasks we are ( at any point in time ) faced with a ceiling to the quality that can be reached with any ( then ) available machine learning system . However , the fact that any given system can not go beyond this ceiling does not mean that machine learning as a whole is similarly limited . A potential loophole is that each type of learning method brings its own ' inductive bias ' to the task and therefore different methods will tend to produce different errors . In this paper , we are concerned with the question whether these differences between models can indeed be exploited to yield a data driven model with superior performance . In the machine learning literature this approach is known as ensemble , stacked , or combined classifiers . It has been shown that , when the errors are uncorrelated to a sufficient degree , the resulting combined classifier will often perform better than all the individual systems ( Ali and Pazzani 1996 ; Chan and Stolfo 1995 ; Tumer and Gosh 1996 ) . The underlying assumption is twofold . First , the combined votes will make the system more robust to the quirks of each learner 's particular bias . 
Also , the use of information about each individual method 's behaviour in principle even admits the possibility to fix collective errors . We will execute our investigation by means of an experiment . The NLP task used in the experiment is morpho-syntactic wordclass tagging . The reasons for this choice are several . First of all , tagging is a widely researched and well-understood task ( cf . van Halteren ( ed . ) 1998 ) . Second , current performance levels on this task still leave room for improvement : ' state of the art ' performance for data driven automatic wordclass taggers ( tagging English text with single tags from a low detail tagset ) is 96 - 97 % correctly tagged words . Finally , a number of rather different methods are available that generate a fully functional tagging system from annotated text . Our experiment shows that , at least for the task at hand , combination of several different systems allows us to raise the performance ceiling for data driven systems . Obviously there is still room for a closer examination of the differences between the combination methods , e.g. the question whether Memory-Based combination would have performed better if we had provided more training data than just Tune , and of the remaining errors , e.g. the effects of inconsistency in the data ( cf . Ratnaparkhi 1996 on such effects in the Penn Treebank corpus ) . Regardless of such closer investigation , we feel that our results are encouraging enough to extend our investigation of combination , starting with additional component taggers and selection strategies , and going on to shifts to other tagsets and/or languages . But the investigation need not be limited to wordclass tagging , for we expect that there are many other NLP tasks where combination could lead to worthwhile improvements . | Different data driven approaches tend to produce different errors and their qualities are limited due to the learning method and available training material.
They propose to combine four different modelling methods for the task of morpho-syntactic wordclass tagging by using several voting strategies and second stage classifiers.
All combinations outperform the best component, with the best one showing a 19.1% lower error rate and raising the performance ceiling. |
N03-1024 | You are an expert at summarizing long articles. Proceed to summarize the following text:
We describe a syntax-based algorithm that automatically builds Finite State Automata ( word lattices ) from semantically equivalent translation sets . These FSAs are good representations of paraphrases . They can be used to extract lexical and syntactic paraphrase pairs and to generate new , unseen sentences that express the same meaning as the sentences in the input sets . Our FSAs can also predict the correctness of alternative semantic renderings , which may be used to evaluate the quality of translations . In the past , paraphrases have come under the scrutiny of many research communities . Information retrieval researchers have used paraphrasing techniques for query reformulation in order to increase the recall of information retrieval engines ( Sparck Jones and Tait , 1984 ) . Natural language generation researchers have used paraphrasing to increase the expressive power of generation systems ( Iordanskaja et al . , 1991 ; Lenke , 1994 ; Stede , 1999 ) . And researchers in multi-document text summarization ( Barzilay et al . , 1999 ) , information extraction ( Shinyama et al . , 2002 ) , and question answering ( Lin and Pantel , 2001 ; Hermjakob et al . , 2002 ) have focused on identifying and exploiting paraphrases in the context of recognizing redundancies , alternative formulations of the same meaning , and improving the performance of question answering systems . In previous work ( Barzilay and McKeown , 2001 ; Lin and Pantel , 2001 ; Shinyama et al . , 2002 ) , paraphrases are represented as sets or pairs of semantically equivalent words , phrases , and patterns . Although this is adequate in the context of some applications , it is clearly too weak from a generative perspective . Assume , for example , that we know that text pairs ( stock market rose , stock market gained ) and ( stock market rose , stock prices rose ) have the same meaning . If we memorized only these two pairs , it would be impossible to infer that , in fact , consistent with our intuition , any of the following sets of phrases are also semantically equivalent : { stock market rose , stock market gained , stock prices rose , stock prices gained } and { stock market , stock prices } in the context of rose or gained ; { market rose } , { market gained } , { prices rose } and { prices gained } in the context of stock ; and so on . In this paper , we propose solutions for two problems : the problem of paraphrase representation and the problem of paraphrase induction . We propose a new , finite-statebased representation of paraphrases that enables one to encode compactly large numbers of paraphrases . We also propose algorithms that automatically derive such representations from inputs that are now routinely released in conjunction with large scale machine translation evaluations ( DARPA , 2002 ) : multiple English translations of many foreign language texts . For instance , when given as input the 11 semantically equivalent English translations in Figure 1 , our algorithm automatically induces the FSA in Figure 2 , which represents compactly 49 distinct renderings of the same semantic meaning . Our FSAs capture both lexical paraphrases , such as { fighting , bat-tle } , { died , were killed } and structural paraphrases such as { last week 's fighting , the battle of last week } . The contexts in which these are correct paraphrases are also conveniently captured in the representation . 
In previous work , Langkilde and Knight ( 1998 ) used word lattices for language generation , but their method involved hand-crafted rules . Bangalore et al . ( 2001 ) and Barzilay and Lee ( 2002 ) both applied the technique of multi-sequence alignment ( MSA ) to align parallel corpora and produced similar FSAs . For their purposes , they mainly need to ensure the correctness of consensus among different translations , so that different constituent orderings in input sentences do not pose a serious prob- lem . In contrast , we want to ensure the correctness of all paths represented by the FSAs , and direct application of MSA in the presence of different constituent orderings can be problematic . For example , when given as input the same sentences in Figure 1 , one instantiation of the MSA algorithm produces the FSA in Figure 3 , which contains many " bad " paths such as the battle of last week 's fighting took at least 12 people lost their people died in the fighting last week 's fighting ( See Section 4.2.2 for a more quantitative analysis . ) . It 's still possible to use MSA if , for example , the input is pre-clustered to have the same constituent ordering ( Barzilay and Lee ( 2003 ) ) . But we chose to approach this problem from another direction . As a result , we propose a new syntax-based algorithm to produce FSAs . In this paper , we first introduce the multiple translation corpus that we use in our experiments ( see Section 2 ) . We then present the algorithms that we developed to induce finite-state paraphrase representations from such data ( see Section 3 ) . An important part of the paper is dedicated to evaluating the quality of the finite-state representations that we derive ( see Section 4 ) . Since our representations encode thousands and sometimes millions of equivalent verbalizations of the same meaning , we use both manual and automatic evaluation techniques . Some of the automatic evaluations we perform are novel as well . In this paper , we presented a new syntax-based algorithm that learns paraphrases from a newly available dataset . The multiple translation corpus that we use in this paper is the first instance in a series of similar corpora that are built and made publicly available by LDC in the context of a series of DARPA-sponsored MT evaluations . The algorithm we proposed constructs finite state representations of paraphrases that are useful in many contexts : to induce large lists of lexical and structural paraphrases ; to generate semantically equivalent renderings of a given meaning ; and to estimate the quality of machine translation systems . More experiments need to be carried out in order to assess extrinsically whether the FSAs we produce can be used to yield higher agreement scores between human and automatic assessments of translation quality . In our future work , we wish to experiment with more flexible merging algorithms and to integrate better the top-down and bottom-up processes that are used to in-duce FSAs . We also wish to extract more abstract paraphrase patterns from the current representation . Such patterns are more likely to get reused -which would help us get reliable statistics for them in the extraction phase , and also have a better chance of being applicable to unseen data . | Existing approaches to represent paraphrases as sets or pairs of semantically equivalent words, phrases and patterns that are weak for text generation purposes.
They propose a syntax-based algorithm that builds Finite State Automata (word lattices), which are good representations of paraphrases, from semantically equivalent translation sets.
Manual and automatic evaluations show that the representations extracted by the proposed method can be used for automatic translation evaluations. |
P01-1026 | You are an expert at summarizing long articles. Proceed to summarize the following text:
We propose a method to generate large-scale encyclopedic knowledge , which is valuable for much NLP research , based on the Web . We first search the Web for pages containing a term in question . Then we use linguistic patterns and HTML structures to extract text fragments describing the term . Finally , we organize extracted term descriptions based on word senses and domains . In addition , we apply an automatically generated encyclopedia to a question answering system targeting the Japanese Information-Technology Engineers Examination . Reflecting the growth in utilization of the World Wide Web , a number of Web-based language processing methods have been proposed within the natural language processing ( NLP ) , information retrieval ( IR ) and artificial intelligence ( AI ) communities . A sample of these includes methods to extract linguistic resources ( Fujii and Ishikawa , 2000 ; Resnik , 1999 ; Soderland , 1997 ) , retrieve useful information in response to user queries ( Etzioni , 1997 ; McCallum et al . , 1999 ) and mine / discover knowledge latent in the Web ( Inokuchi et al . , 1999 ) . In this paper , mainly from an NLP point of view , we explore a method to produce linguistic resources . Specifically , we enhance the method proposed by Fujii and Ishikawa ( 2000 ) , which extracts encyclopedic knowledge ( i.e. , term descriptions ) from the Web . In brief , their method searches the Web for pages containing a term in question , and uses linguistic expressions and HTML layouts to extract fragments describing the term . They also use a language model to discard non-linguistic fragments . In addition , a clustering method is used to divide descriptions into a specific number of groups . On the one hand , their method is expected to enhance existing encyclopedias , where vocabulary size is relatively limited , and therefore the quantity problems has been resolved . On the other hand , encyclopedias extracted from the Web are not comparable with existing ones in terms of quality . In hand-crafted encyclopedias , term descriptions are carefully organized based on domains and word senses , which are especially effective for human usage . However , the output of Fujii 's method is simply a set of unorganized term descriptions . Although clustering is optionally performed , resultant clusters are not necessarily related to explicit criteria , such as word senses and domains . To sum up , our belief is that by combining extraction and organization methods , we can enhance both quantity and quality of Web-based encyclopedias . Motivated by this background , we introduce an organization model to Fujii 's method and reformalize the whole framework . In other words , our proposed method is not only extraction but generation of encyclopedic knowledge . Section 2 explains the overall design of our encyclopedia generation system , and Section 3 elaborates on our organization model . Section 4 then explores a method for applying our resultant encyclopedia to NLP research , specifically , question answering . Section 5 performs a number of experiments to evaluate our methods . The World Wide Web has been an unprecedentedly enormous information source , from which a number of language processing methods have been explored to extract , retrieve and discover various types of information . In this paper , we aimed at generating encyclopedic knowledge , which is valuable for many applications including human usage and natural language understanding . 
For this purpose , we reformalized an existing Web-based extraction method , and proposed a new statistical organization model to improve the quality of extracted data . Given a term for which encyclopedic knowledge ( i.e. , descriptions ) is to be generated , our method sequentially performs a ) retrieval of Web pages contain-ing the term , b ) extraction of page fragments describing the term , and c ) organizing extracted descriptions based on domains ( and consequently word senses ) . In addition , we proposed a question answering system , which answers interrogative questions associated with what , by using a Web-based encyclopedia as a knowledge base . For the purpose of evaluation , we used as test inputs technical terms collected from the Class II IT engineers examination , and found that the encyclopedia generated through our method was of operational quality and quantity . We also used test questions from the Class II examination , and evaluated the Web-based encyclopedia in terms of question answering . We found that our Webbased encyclopedia improved the system coverage obtained solely with an existing dictionary . In addition , when we used both resources , the performance was further improved . Future work would include generating information associated with more complex interrogations , such as ones related to how and why , so as to enhance Webbased natural language understanding . | Existing methods that extract encyclopedic knowledge from the Web output unorganized clusters of term descriptions not necessarily related to explicit criteria while clustering is performed.
They propose to use word senses and domains for organizing term descriptions extracted from the Web to improve the quality.
The generated encyclopedia is applied to a Japanese question answering system and improves coverage over a system which solely depends on an existing dictionary. |
E09-1032 | You are an expert at summarizing long articles. Proceed to summarize the following text:
We explore the problem of resolving the second person English pronoun you in multi-party dialogue , using a combination of linguistic and visual features . First , we distinguish generic and referential uses , then we classify the referential uses as either plural or singular , and finally , for the latter cases , we identify the addressee . In our first set of experiments , the linguistic and visual features are derived from manual transcriptions and annotations , but in the second set , they are generated through entirely automatic means . Results show that a multimodal system is often preferable to a unimodal one . The English pronoun you is the second most frequent word in unrestricted conversation ( after I and right before it ) . 1 Despite this , with the exception of Gupta et al . ( 2007b ; 2007a ) , its resolution has received very little attention in the literature . This is perhaps not surprising since the vast amount of work on anaphora and reference resolution has focused on text or discourse -mediums where second-person deixis is perhaps not as prominent as it is in dialogue . For spoken dialogue pronoun resolution modules however , resolving you is an essential task that has an important impact on the capabilities of dialogue summarization systems . Besides being important for computational implementations , resolving you is also an interesting and challenging research problem . As for third person pronouns such as it , some uses of you are not strictly referential . These include discourse marker uses such as you know in example ( 1 ) , and generic uses like ( 2 ) , where you does not refer to the addressee as it does in ( 3 ) . ( 1 ) It 's not just , you know , noises like something hitting . ( 2 ) Often , you need to know specific button sequences to get certain functionalities done . ( 3 ) I think it 's good . You 've done a good review . However , unlike it , you is ambiguous between singular and plural interpretations -an issue that is particularly problematic in multi-party conversations . While you clearly has a plural referent in ( 4 ) , in ( 3 ) the number of its referent is ambiguous . 2(4 ) I do n't know if you guys have any questions . When an utterance contains a singular referential you , resolving the you amounts to identifying the individual to whom the utterance is addressed . This is trivial in two-person dialogue since the current listener is always the addressee , but in conversations with multiple participants , it is a complex problem where different kinds of linguistic and visual information play important roles ( Jovanovic , 2007 ) . One of the issues we investigate here is how this applies to the more concrete problem of resolving the second person pronoun you . We approach this issue as a three-step problem . Using the AMI Meeting Corpus ( McCowan et al . , 2005 ) of multi-party dialogues , we first discriminate between referential and generic uses of you . Then , within the referential uses , we distinguish between singular and plural , and finally , we resolve the singular referential instances by identifying the intended addressee . We use multimodal features : initially , we extract discourse features from manual transcriptions and use visual information derived from manual annotations , but then we move to a fully automatic approach , using 1-best transcriptions produced by an automatic speech recognizer ( ASR ) and visual features automatically extracted from raw video . 
In the next section of this paper , we give a brief overview of related work . We describe our data in Section 3 , and explain how we extract visual and linguistic features in Sections 4 and 5 respectively . Section 6 then presents our experiments with manual transcriptions and annotations , while Section 7 , those with automatically extracted information . We end with conclusions in Section 8 . We have investigated the automatic resolution of the second person English pronoun you in multi-party dialogue , using a combination of linguistic and visual features . We conducted a first set of experiments where our features were derived from manual transcriptions and annotations , and then a second set where they were generated by entirely automatic means . To our knowledge , this is the first attempt at tackling this problem using automatically extracted multimodal information . Our experiments showed that visual information can be highly predictive in resolving the addressee of singular referential uses of you . Visual features significantly improved the performance of both our manual and automatic systems , and the latter achieved an encouraging 75 % accuracy . We also found that our visual features had predictive power for distinguishing between generic and referential uses of you , and between referential singulars and plurals . Indeed , for the latter task , they significantly improved the manual system 's performance . The listeners ' gaze features were useful here : in our data set it was apparently the case that the speaker would often use the whiteboard / projector screen when addressing the group , thus drawing the listeners ' gaze in this direction . Future work will involve expanding our dataset , and investigating new potentially predictive features . In the slightly longer term , we plan to integrate the resulting system into a meeting assistant whose purpose is to automatically extract useful information from multi-party meetings . | Although the word "you" is frequently used and has several possible meanings, such as reference or generic, it is not well studied yet.
They first separate generic and referential uses of "you" using features from manual transcriptions and annotations, then move to a fully automatic multimodal system.
They show that visual features can help distinguish the word "you" in multi-party conversations. |
N13-1083 | You are an expert at summarizing long articles. Proceed to summarize the following text:
We investigate two systems for automatic disfluency detection on English and Mandarin conversational speech data . The first system combines various lexical and prosodic features in a Conditional Random Field model for detecting edit disfluencies . The second system combines acoustic and language model scores for detecting filled pauses through constrained speech recognition . We compare the contributions of different knowledge sources to detection performance between these two languages . Speech disfluencies are common phenomena in spontaneous speech . They consist of spoken words and phrases that represent self-correction , hesitation , and floor-grabbing behaviors , but do not add semantic information ; removing them yields the intended , fluent utterance . The presence of disfluencies in conversational speech data can cause problems for both downstream processing ( parsing and other natural language processing tasks ) and human readability of speech transcripts . There has been much research effort on automatic disfluency detection in recent years ( Shriberg and Stolcke , 1997 ; Snover et al . , 2004 ; Liu et al . , 2006 ; Lin and Lee , 2009 ; Schuler et al . , 2010 ; Georgila et al . , 2010 ; Zwarts and Johnson , 2011 ) , particularly from the DARPA EARS ( Effective , Affordable , Reusable Speech-to-Text ) MDE ( MetaData Extraction ) ( DARPA Information Processing Technology Office , 2003 ) program , which focused on the automatic transcription of sizable amounts of speech data and rendering such transcripts in readable form , for both conversational telephone speech ( CTS ) and broadcast news ( BN ) . However , the EARS MDE effort was focused on English only , and there has n't been much research on the effectiveness of similar automatic disfluency detection approaches for multiple languages . This paper presents three main innovations . First , we extend the EARS MDE-style disfluency detection approach combining lexical and prosodic features using a Conditional Random Field ( CRF ) model , which was employed for detecting disfluency on English conversational speech data ( Liu et al . , 2005 ) , to Mandarin conversational speech , as presented in Section 2 . Second , we implement an automatic filled pause detection approach through constrained speech recognition , as presented in Section 3 . Third , for both disfluency detection systems , we compare side-by-side contributions of different knowledge sources to detection performance for two languages , English and Mandarin , as presented in Section 4 . Conclusions appear in Section 5 . In conclusion , we have presented two automatic disfluency detection systems , one combining various lexical and prosodic features , and the other combining LVCSR acoustic and language model knowledge sources . We observed significant improvements in combining lexical and prosodic features over just employing word n-gram features , for both languages . When combining AM and LM knowledge sources for FP detection in constrained speech recognition , we found increasing LM weight improved both false alarm and miss rates for Mandarin but degraded the miss rate for English . | Existing works on detecting speech disfluency which can be a problem for downstream processing and creating transcripts only focus on English.
They evaluate a Conditional Random Field-based edit disfluency detection model and a system that combines acoustic and language model scores to detect filled pauses in Mandarin.
Their system comparisons in English and Mandarin show that combining lexical and prosodic features achieves improvements in both languages. |
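As a rough illustration of the first system's setup, below is a token-level feature design for edit-disfluency detection. A true linear-chain CRF would also model label transitions; here a simple per-token logistic regression stands in so the sketch stays dependency-light, and the toy feature templates and labels are assumptions (the paper's system additionally uses prosodic features):

```python
# Sketch of token-level features for edit-disfluency detection.
# A per-token logistic regression stands in for the linear-chain CRF.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(sent, i):
    w = sent[i]
    return {
        "word": w,
        "prev_word": sent[i - 1] if i > 0 else "<s>",
        "next_word": sent[i + 1] if i < len(sent) - 1 else "</s>",
        "repeats_prev": str(i > 0 and w == sent[i - 1]),
        "is_filler": str(w in {"uh", "um", "like"}),
    }

# Toy data: tokens labelled E (inside an edit/disfluent region) or O (fluent).
sents = [["i", "i", "want", "to", "go"],
         ["we", "went", "to", "uh", "to", "the", "park"]]
labels = [["E", "O", "O", "O", "O"],
          ["O", "O", "E", "E", "O", "O", "O"]]

X = [token_features(s, i) for s in sents for i in range(len(s))]
y = [tag for seq in labels for tag in seq]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict([token_features(["i", "i", "see"], 0)]))
```

In practice the feature dictionaries would also carry pause durations, pitch and energy measurements, and part-of-speech tags, and decoding would be done jointly over the whole sequence.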
D16-1205 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words . Current DSMs , however , represent context words as separate features , thereby loosing important information for word expectations , such as word interrelations . In this paper , we present a DSM that addresses this issue by defining verb contexts as joint syntactic dependencies . We test our representation in a verb similarity task on two datasets , showing that joint contexts achieve performances comparable to single dependencies or even better . Moreover , they are able to overcome the data sparsity problem of joint feature spaces , in spite of the limited size of our training corpus . Distributional Semantic Models ( DSMs ) rely on the Distributional Hypothesis ( Harris , 1954 ; Sahlgren , 2008 ) , stating that words occurring in similar contexts have similar meanings . On such theoretical grounds , word co-occurrences extracted from corpora are used to build semantic representations in the form of vectors , which have become very popular in the NLP community . Proximity between word vectors is taken as an index of meaning similarity , and vector cosine is generally adopted to measure such proximity , even though other measures have been proposed ( Weeds et al . , 2004 ; Santus et al . , 2016 ) . Most of DSMs adopt a bag-of-words approach , that is they turn a text span ( i.e. , a word window or a parsed sentence ) into a set of words and they register separately the co-occurrence of each word with a given target . The problem with this approach is that valuable information concerning word interrelations in a context gets lost , because words co-occurring with a target are treated as independent features . This is why works like Ruiz-Casado et al . ( 2005 ) , Agirre et al . ( 2009 ) and Melamud et al . ( 2014 ) proposed to introduce richer contexts in distributional spaces , by using entire word windows as features . These richer contexts proved to be helpful to semantically represent verbs , which are characterized by highly context-sensitive meanings , and complex argument structures . In fact , two verbs may share independent words as features despite being very dissimilar from the semantic point of view . For instance kill and heal share the same object nouns in The doctor healed the patient and the The poison killed the patient , but are highly different if we consider their joint dependencies as a single context . Nonetheless , richer contexts like these suffer from data sparsity , therefore requiring either larger corpora or complex smoothing processes . In this paper , we propose a syntactically savvy notion of joint contexts . To test our representation , we implement several DSMs and we evaluate them in a verb similarity task on two datasets . The results show that , even using a relatively small corpus , our syntactic joint contexts are robust with respect to data sparseness and perform similarly or better than single dependencies in a wider range of parameter settings . The paper is organized as follows . In Section 2 , we provide psycholinguistic and computational background for this research , describing recent models based on word windows . In Section 3 , we describe our reinterpretation of joint contexts with syntactic dependencies . Evaluation settings and results are presented in Section 4 . 
In this paper , we have presented our proposal for a new type of vector representation based on joint features , which should emulate more closely the general knowledge about event participants that seems to be the organizing principle of our mental lexicon . A core issue of previous studies was the data sparseness challenge , and we coped with it by means of a more abstract , syntactic notion of joint context . The models using joint dependencies were able at least to perform comparably to traditional , dependency-based DSMs . In our experiments , they even achieved the best correlation scores across several parameter settings , especially after the application of SVD . We want to emphasize that previous works such as Agirre et al . ( 2009 ) already showed that large word windows can have a higher discriminative power than indipendent features , but they did it by using a huge training corpus . In our study , joint context-based representations derived from a small corpus such as RCV1 are already showing competitive performances . This result strengthens our belief that dependencies are a possible solution for the data sparsity problem of joint feature spaces . We also believe that verb similarity might not be the best task to show the usefulness of joint contexts for semantic representation . The main goal of the present paper was to show that joint contexts are a viable option to exploit the full potential of distributional information . Our successful tests on verb similarity prove that syntactic joint contexts do not suffer of data sparsity and are also able to beat other types of representations based on independent word features . Moreover , syntactic joint contexts are much simpler and more competitive with respect to window-based ones . The good performance in the verb similarity task motivates us to further test syntactic joint contexts on a larger range of tasks , such as word sense disambiguation , textual entailment and classification of semantic relations , so that they can unleash their full potential . Moreover , our proposal opens interesting perspectives for computational psycholinguistics , especially for modeling those semantic phenomena that are inherently related to the activation of event knowledge ( e.g. thematic fit ) . | Providing richer contexts to Distributional Semantic Models improves by taking word interrelations into account but it would suffer from data sparsity.
They propose a Distributional Semantic Model that incorporates verb contexts as joint syntactic dependencies so that it emulates knowledge about event participants.
They show that representations based on joint contexts perform comparably to or better than single-dependency and window-based representations on two verb similarity datasets despite the limited size of the training corpus. |
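To make the kill/heal example above concrete, here is a toy sketch contrasting independent dependency features with joint subject-object contexts; the parses and counts are fabricated, and a real DSM would derive them from a parsed corpus:

```python
# Sketch of "joint context" features for verbs: the subject and object of a
# clause form a single feature instead of two independent ones.
from collections import Counter
import math

clauses = [
    ("doctor", "heal", "patient"),
    ("nurse", "heal", "patient"),
    ("poison", "kill", "patient"),
    ("storm", "kill", "sailor"),
]

single, joint = {}, {}
for subj, verb, obj in clauses:
    single.setdefault(verb, Counter()).update([f"nsubj:{subj}", f"dobj:{obj}"])
    joint.setdefault(verb, Counter()).update([f"nsubj:{subj}+dobj:{obj}"])

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[f] * b[f] for f in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

print("single-dependency sim(kill, heal):", cosine(single["kill"], single["heal"]))
print("joint-context sim(kill, heal):   ", cosine(joint["kill"], joint["heal"]))
```

With independent features the two verbs share dobj:patient and receive a non-zero similarity, while the joint contexts keep them apart.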
D09-1042 | You are an expert at summarizing long articles. Proceed to summarize the following text:
This paper presents an effective method for generating natural language sentences from their underlying meaning representations . The method is built on top of a hybrid tree representation that jointly encodes both the meaning representation as well as the natural language in a tree structure . By using a tree conditional random field on top of the hybrid tree representation , we are able to explicitly model phrase-level dependencies amongst neighboring natural language phrases and meaning representation components in a simple and natural way . We show that the additional dependencies captured by the tree conditional random field allows it to perform better than directly inverting a previously developed hybrid tree semantic parser . Furthermore , we demonstrate that the model performs better than a previous state-of-the-art natural language generation model . Experiments are performed on two benchmark corpora with standard automatic evaluation metrics . One of the ultimate goals in the field of natural language processing ( NLP ) is to enable computers to converse with humans through human languages . To achieve this goal , two important issues need to be studied . First , it is important for computers to capture the meaning of a natural language sentence in a meaning representation . Second , computers should be able to produce a humanunderstandable natural language sentence from its meaning representation . These two tasks are referred to as semantic parsing and natural language generation ( NLG ) , respectively . In this paper , we use corpus-based statistical methods for constructing a natural language generation system . Given a set of pairs , where each pair consists of a natural language ( NL ) sentence and its formal meaning representation ( MR ) , a learning method induces an algorithm that can be used for performing language generation from other previously unseen meaning representations . A crucial question in any natural language processing system is the representation used . Meaning representations can be in the form of a tree structure . In Lu et al . ( 2008 ) , we introduced a hybrid tree framework together with a probabilistic generative model to tackle semantic parsing , where tree structured meaning representations are used . The hybrid tree gives a natural joint tree representation of a natural language sentence and its meaning representation . A joint generative model for natural language and its meaning representation , such as that used in Lu et al . ( 2008 ) has several advantages over various previous approaches designed for semantic parsing . First , unlike most previous approaches , the generative approach models a simultaneous generation process for both NL and MR . One elegant property of such a joint generative model is that it allows the modeling of both semantic parsing and natural language generation within the same process . Second , the generative process proceeds as a recursive top-down Markov process in a way that takes advantage of the tree structure of the MR . The hybrid tree generative model proposed in Lu et al . ( 2008 ) was shown to give stateof-the-art accuracy in semantic parsing on benchmark corpora . While semantic parsing with hybrid trees has been studied in Lu et al . ( 2008 ) , its inverse task -NLG with hybrid trees -has not yet been explored . We believe that the properties that make the hybrid trees effective for semantic parsing also make them effective for NLG . 
In this paper , we develop systems for the generation task by building on top of the generative model introduced in Lu et al . ( 2008 ) ( referred to as the LNLZ08 system ) . We first present a baseline model by directly " inverting " the LNLZ08 system , where an NL sentence is generated word by word . We call this model the direct inversion model . This model is unable to model some long range global dependencies over the entire NL sentence to be generated . To tackle several weaknesses exhibited by the baseline model , we next introduce an alternative , novel model that performs generation at the phrase level . Motivated by conditional random fields ( CRF ) ( Lafferty et al . , 2001 ) , a different parameterization of the conditional probability of the hybrid tree that enables the model to encode some longer range dependencies amongst phrases and MRs is used . This novel model is referred to as the tree CRF-based model . Evaluation results for both models are presented , through which we demonstrate that the tree CRF-based model performs better than the direct inversion model . We also compare the tree CRFbased model against the previous state-of-the-art model of Wong and Mooney ( 2007 ) . Furthermore , we evaluate our model on a dataset annotated with several natural languages other than English ( Japanese , Spanish , and Turkish ) . Evaluation results show that our proposed tree CRF-based model outperforms the previous model . In this paper , we presented two novel models for the task of generating natural language sentences from given meaning representations , under a hybrid tree framework . We first built a simple direct inversion model as a baseline . Next , to address the limitations associated with the direct inversion model , a tree CRF-based model was introduced . We evaluated both models on standard benchmark corpora . Evaluation results show that the tree CRF-based model performs better than the direct inversion model , and that the tree CRF-based model also outperforms WASP -1 + + , which was a previous state-of-the-art system reported in the literature . | While hybrid trees are shown to be effective for semantic parsing, their application for text generation is under explored.
They propose a phrase-level tree conditional random field model that operates over a hybrid tree jointly encoding the meaning representation and the natural language for text generation.
Experiments in four languages with automatic evaluation metrics show that the proposed conditional random field-based model outperforms the previous state-of-the-art system. |
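A small sketch of the hybrid tree idea described above: each node jointly stores a meaning-representation production and the natural-language phrase realizing it. The toy productions and phrases are invented, and the actual systems score such trees with a tree CRF rather than constructing them by hand:

```python
# Sketch of a hybrid-tree node that jointly stores an MR production and the
# NL phrase realizing it. The grammar and phrases below are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HybridNode:
    mr_production: str                 # e.g. "QUERY -> answer(RIVER)"
    nl_phrase: List[str]               # words realizing this production
    children: List["HybridNode"] = field(default_factory=list)

    def linearize(self) -> List[str]:
        words = list(self.nl_phrase)
        for child in self.children:
            words.extend(child.linearize())
        return words

tree = HybridNode(
    "QUERY -> answer(RIVER)", ["what", "is"],
    [HybridNode("RIVER -> longest(RIVER)", ["the", "longest", "river"],
                [HybridNode("RIVER -> river(all)", ["in", "the", "country"])])],
)
print(" ".join(tree.linearize()))
```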
2020.acl-main.573 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Continual relation learning aims to continually train a model on new data to learn incessantly emerging novel relations while avoiding catastrophically forgetting old relations . Some pioneering work has proved that storing a handful of historical relation examples in episodic memory and replaying them in subsequent training is an effective solution for such a challenging problem . However , these memorybased methods usually suffer from overfitting the few memorized examples of old relations , which may gradually cause inevitable confusion among existing relations . Inspired by the mechanism in human long-term memory formation , we introduce episodic memory activation and reconsolidation ( EMAR ) to continual relation learning . Every time neural models are activated to learn both new and memorized data , EMAR utilizes relation prototypes for memory reconsolidation exercise to keep a stable understanding of old relations . The experimental results show that EMAR could get rid of catastrophically forgetting old relations and outperform the state-of-the-art continual learning models . The code and datasets are released on https://github.com / thunlp/ ContinualRE . Relation extraction aims at detecting relations between entities from text , e.g. , extracting the relation " the president of " from the given sentence " Newton served as the president of the Royal Society " , which could serve as external resource for various downstream applications ( Dong et al . , 2015 ; Xiong et al . , 2017 ; Schlichtkrull et al . , 2018 ) . The conventional RE methods ( Riedel et al . , 2013 ; Zeng et al . , 2014 ; Lin et al . , 2016 ) mostly focus on recognizing relations for a fixed pre-defined relation set , and can not handle rapidly emerging novel relations in the real world . Some researchers therefore explore to detect and learn incessantly emerging relations in an open scenario . As shown in Figure 1 , their efforts can be formulated into a two-step pipeline : ( 1 ) Open Relation Learning extracts phrases and arguments to construct patterns of specific relations , and then discovers unseen relation types by clustering patterns , and finally expands sufficient examples of new relation types from large-scale textual corpora ; ( 2 ) Continual Relation Learning continually uses those expanded examples of new relations to train an effective classifier . The classifier is trained on a sequence of tasks for handling both existing and novel relations , where each task has its own relation set . Although continual relation learning is vital for learning emerging relations , there are rare explorations for this field . A straightforward solution is to store all historical data and re-train models every time new relations and examples come in . Nevertheless , it is computationally expensive since relations are in sustainable growth . Moreover , the huge example number of each relation makes frequently mixing new and old examples become infeasible in the real world . Therefore , storing all data is not practical in continual relation learning . In view of this , the recent preliminary work ( Wang et al . , 2019 ) indicates that the main challenge of continual relation learning is the catastrophic forgetting problem , i.e. , it is hard to learn new relations and meanwhile avoid forgetting old relations , considering memorizing all the data is almost impossible . Recent work ( Shin et al . , 2017 ; Kemker and Kanan , 2018 ; Chaudhry et al . 
, 2019 ) has shown that the memory-based approaches , maintaining episodic memory to save a few training examples in old tasks and re-training memorized examples during training new tasks , are one of the most effective solutions to the catastrophic forgetting problem , especially for continual learning in NLP scenarios ( Wang et al . , 2019 ; d'Autume et al . , 2019 ) . However , existing memory-based models still suffer from an overfitting problem : when adapting them for continual relation learning , they may frequently change feature distribution of old relations , gradually overfit a few examples in memory , and finally become confused among old relations after long-term training . In fact , these memory-based methods are similar to long-term memory model of mammalian memory in neuroscience ( McClelland et al . , 1995 ; Bontempi et al . , 1999 ) . Although researchers in neuroscience are not clear about secrets inside the human brain , they reach a consensus that the formation of long-term memory relies on continually replaying and consolidating information ( Tononi and Cirelli , 2006 ; Boyce et al . , 2016 ; Yang et al . , 2014 ) , corresponding to the episodic memory and memory replay in continual learning models . Yet later work ( Nader et al . , 2000 ; Lee et al . , 2004 ; Alberini , 2005 ) in neuroscience indicates that reactivation of consolidated memory triggers a reconsolidation stage to continually maintain memory , and memory is easy to be changed or erased in this stage . To apply some reconsolidation exercises can help memory go through this stage and keep long-term memory stable . Intuitively , the ex-isting memory-based models seem like continual memory activation without reconsolidation exercises , and thus become sensitive and volatile . Inspired by the reconsolidation mechanism in human long-term memory formation , we introduce episodic memory activation and reconsolidation ( EMAR ) to continual relation learning in this paper . More specifically , when training models on new relations and their examples , we first adopt memory replay to activate neural models on examples of both new relations and memory , and then utilize a special reconsolidation module to let models avoid excessively changing and erasing feature distribution of old relations . As the core of relation learning is to grasp relation prototypes rather than rote memorization of relation examples , our reconsolidation module requires models to be able to distinguish old relation prototypes after each time memory is replayed and activated . As compared with pioneering explorations to improve episodic memory replay ( Chaudhry et al . , 2019 ; Wang et al . , 2019 ) , with toughly keeping feature distribution of old relations invariant , EMAR is more flexible in feature spaces and powerful in remembering relation prototypes . We conduct sufficient experiments on several RE datasets , and the results show that EMAR effectively alleviates the catastrophic forgetting problem and significantly outperforms the stateof-the-art continual learning models . Further experiments and analyses indicate the reasons for the effectiveness of EMAR , proving that it can utilize a few examples in old tasks to reconsolidate old relation prototypes and keep better distinction among old relations after long-term training . 
To alleviate catastrophically forgetting old relations in continual relation learning , we introduce episodic memory activation and reconsolidation ( EMAR ) , inspired by the mechanism in human long-term memory formation . Compared with existing memory-based methods , EMAR requires models to understand the prototypes of old relations rather than to overfit a few specific memorized examples , which can keep better distinction among relations after long-term training . We conduct experiments on three benchmarks in relation extraction and carry out extensive experimental results as well as empirical analyses , showing the effectiveness of EMAR on utilizing memorized examples . For future work , how to combine open relation learning and continual relation learning together to complete the pipeline for emerging relations still remains a problem , and we will continue to work on it . | Storing histories of examples is shown to be effective for continual relation learning however existing methods suffer from overfitting to memorize a few old memories.
They propose an episodic memory activation and reconsolidation method, inspired by the mechanism of human long-term memory formation, which aims to keep a stable understanding of old relations.
The proposed method mitigates catastrophic forgetting of old relations and achieves state-of-the-art results on several relation extraction datasets, showing that it can effectively exploit the memorized examples. |
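A minimal sketch of the prototype idea behind EMAR: each old relation is summarized by a prototype averaged over its few memorized examples, and the reconsolidation exercise asks the model to keep these prototypes distinguishable. The encoder is replaced here by synthetic embeddings, so all numbers are placeholders rather than the paper's actual representations:

```python
# Sketch: relation prototypes computed from a small episodic memory, and
# nearest-prototype classification of new examples. Embeddings are random
# placeholders standing in for an encoder's sentence representations.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
memory = {  # relation -> a handful of memorized example embeddings
    "president_of": rng.normal(loc=1.0, size=(5, dim)),
    "capital_of": rng.normal(loc=-1.0, size=(5, dim)),
}

prototypes = {rel: ex.mean(axis=0) for rel, ex in memory.items()}

def classify(embedding):
    # Pick the relation whose prototype is closest in Euclidean distance.
    return min(prototypes, key=lambda rel: np.linalg.norm(embedding - prototypes[rel]))

query = rng.normal(loc=1.0, size=dim)   # should look like "president_of"
print(classify(query))
```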
2022.acl-long.304 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Contrastive learning has achieved impressive success in generation tasks to militate the " exposure bias " problem and discriminatively exploit the different quality of references . Existing works mostly focus on contrastive learning on the instance-level without discriminating the contribution of each word , while keywords are the gist of the text and dominant the constrained mapping relationships . Hence , in this work , we propose a hierarchical contrastive learning mechanism , which can unify hybrid granularities semantic meaning in the input text . Concretely , we first propose a keyword graph via contrastive correlations of positive-negative pairs to iteratively polish the keyword representations . Then , we construct intra-contrasts within instance-level and keyword-level , where we assume words are sampled nodes from a sentence distribution . Finally , to bridge the gap between independent contrast levels and tackle the common contrast vanishing problem , we propose an inter-contrast mechanism that measures the discrepancy between contrastive keyword nodes respectively to the instance distribution . Experiments demonstrate that our model outperforms competitive baselines on paraphrasing , dialogue generation , and storytelling tasks . Generation tasks such as storytelling , paraphrasing , and dialogue generation aim at learning a certain correlation between text pairs that maps an arbitrary-length input to another arbitrary-length output . Traditional methods are mostly trained with " teacher forcing " and lead to an " exposure bias " problem ( Schmidt , 2019 ) . Incorporating the generation method with contrastive learning achieved impressive performance on tackling such issues , which takes an extra consideration of synthetic negative samples contrastively ( Lee et al . , 2021 Existing contrastive mechanisms are mainly focused on the instance level ( Lee et al . , 2021 ; Cai et al . , 2020 ) . However , word-level information is also of great importance . Take the case shown in the upper part of Figure 1 for example , the keyword covers the gist of the input text and determines the embedding space of the text . The text representation will be significantly affected if adding a slight perturbation on the keyword , i.e. , changing " cosmology " to " astrophysics " . In addition , as shown on the bottom part , under some circumstances , it is too easy for the model to do the classification since the semantic gap between contrastive pairs is huge . Thus , the model fails to distinguish the actual discrepancy , which causes a " contrast vanishing " problem at both instance-level and keyword-level . Based on the above motivation , in this paper , we propose a hierarchical contrastive learning method built on top of the classic CVAE structure . We choose CVAE due to its ability in modeling global properties such as syntactic , semantic , and discourse coherence ( Li et al . , 2015 ; Yu et al . , 2020 ) . We first learn different granularity representations through two independent contrast , i.e. , instancelevel and keyword-level . Specifically , we use the universal and classic TextRank ( Mihalcea and Tarau , 2004 ) method to extract keywords from each text , which contain the most important information and need to be highlighted . On the instancelevel , we treat the keyword in the input text as an additional condition for a better prior semantic distribution . 
Then , we utilize Kullback-Leibler divergence ( Kullback and Leibler , 1951 ) to reduce the distance between prior distribution and positive posterior distribution , and increase the distance with the negative posterior distribution . While on the keyword-level , we propose a keyword graph via contrastive correlations of positive-negative pairs to learn informative and accurate keyword representations . By treating the keyword in the output text as an anchor , the imposter keyword is produced by neighboring nodes of the anchor keyword and forms the keyword-level contrast , where the similarity between the imposter keyword and the anchor keyword is poorer than the positive keyword . To unify individual intra-contrasts and tackle the " contrast vanishing " problem in independent contrastive granularities , we leverage an inter-contrast , the Mahalanobis contrast , to investigate the contrastive enhancement based on the Mahalanobis distance ( De Maesschalck et al . , 2000 ) , a measure of the distance between a point and a distribution , between the instance distribution and the keyword representation . Concretely , we ensure the distance from the anchor instance distribution to the groundtruth keyword vector is closer than to the imposter keyword vector . The Mahalanobis contrast plays an intermediate role that joins the different granularities contrast via incorporating the distribution of instance with the representation of its crucial part , and makes up a more comprehensive keyworddriven hierarchical contrastive mechanism , so as to ameliorate the generated results . We empirically show that our model outperforms CVAE and other baselines significantly on three generation tasks : paraphrasing , dialogue genera-tion , and storytelling . Our contributions can be summarized as follows : • To our best knowledge , we are the first to propose an inter-level contrastive learning method , which unifies instance-level and keyword-level contrasts in the CVAE framework . • We propose three contrastive learning measurements : KL divergence for semantic distribution , cosine distance for points , and Mahalanobis distance for points with distribution . • We introduce a global keyword graph to obtain polished keyword representations and construct imposter keywords for contrastive learning . In this paper , we propose a hierarchical contrastive learning mechanism , which consists of intra-contrasts within instance-level and keywordlevel and inter-contrast with Mahalanobis contrast . The experimental results yield significant out-performance over baselines when applied in the CVAE framework . In the future , we aim to extend the contrastive learning mechanism to different basic models , and will explore contrastive learning methods based on external knowledge . | Existing works on contrastive learning for text generation focus only on instance-level while word-level information such as keywords is also of great importance.
They propose a CVAE-based hierarchical contrastive learning mechanism that operates at both the instance level and the keyword level, using a keyword graph which iteratively polishes the keyword representations.
The proposed model outperforms CVAE and baselines on storytelling, paraphrasing, and dialogue generation tasks. |
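A sketch of the Mahalanobis contrast described above, measuring how far the ground-truth and imposter keyword vectors lie from the anchor instance distribution; the hinge/margin form and all vectors are illustrative assumptions, since the paper formulates the contrast inside a CVAE objective:

```python
# Sketch of the Mahalanobis contrast: the anchor instance distribution
# (mean, covariance) should be closer to the ground-truth keyword vector than
# to an imposter keyword vector, by a margin. All vectors are synthetic.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
mean = rng.normal(size=dim)
cov = np.eye(dim) * 0.5                 # instance distribution N(mean, cov)
cov_inv = np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

positive_kw = mean + 0.1 * rng.normal(size=dim)   # ground-truth keyword
imposter_kw = mean + 2.0 * rng.normal(size=dim)   # neighbouring imposter keyword

margin = 1.0
loss = max(0.0, margin + mahalanobis(positive_kw, mean, cov_inv)
                        - mahalanobis(imposter_kw, mean, cov_inv))
print("hinge-style contrastive loss:", loss)
```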
P10-1139 | You are an expert at summarizing long articles. Proceed to summarize the following text:
There is a growing research interest in opinion retrieval as on-line users ' opinions are becoming more and more popular in business , social networks , etc . Practically speaking , the goal of opinion retrieval is to retrieve documents , which entail opinions or comments , relevant to a target subject specified by the user 's query . A fundamental challenge in opinion retrieval is information representation . Existing research focuses on document-based approaches and documents are represented by bag-of-word . However , due to loss of contextual information , this representation fails to capture the associative information between an opinion and its corresponding target . It can not distinguish different degrees of a sentiment word when associated with different targets . This in turn seriously affects opinion retrieval performance . In this paper , we propose a sentence-based approach based on a new information representation , namely topic-sentiment word pair , to capture intra-sentence contextual information between an opinion and its target . Additionally , we consider inter-sentence information to capture the relationships among the opinions on the same topic . Finally , the two types of information are combined in a unified graph-based model , which can effectively rank the documents . Compared with existing approaches , experimental results on the COAE08 dataset showed that our graph-based model achieved significant improvement . In recent years , there is a growing interest in sharing personal opinions on the Web , such as product reviews , economic analysis , political polls , etc . These opinions can not only help independent users make decisions , but also obtain valuable feedbacks ( Pang et al . , 2008 ) . Opinion oriented research , including sentiment classifica-tion , opinion extraction , opinion question answering , and opinion summarization , etc . are receiving growing attention ( Wilson , et al . , 2005 ; Liu et al . , 2005 ; Oard et al . , 2006 ) . However , most existing works concentrate on analyzing opinions expressed in the documents , and none on how to represent the information needs required to retrieve opinionated documents . In this paper , we focus on opinion retrieval , whose goal is to find a set of documents containing not only the query keyword(s ) but also the relevant opinions . This requirement brings about the challenge on how to represent information needs for effective opinion retrieval . In order to solve the above problem , previous work adopts a 2-stage approach . In the first stage , relevant documents are determined and ranked by a score , i.e. tf-idf value . In the second stage , an opinion score is generated for each relevant document ( Macdonald and Ounis , 2007 ; Oard et al . , 2006 ) . The opinion score can be acquired by either machine learning-based sentiment classifiers , such as SVM ( Zhang and Yu , 2007 ) , or a sentiment lexicons with weighted scores from training documents ( Amati et al . , 2007 ; Hannah et al . , 2007 ; Na et al . , 2009 ) . Finally , an overall score combining the two is computed by using a score function , e.g. linear combination , to re-rank the retrieved documents . Retrieval in the 2-stage approach is based on document and document is represented by bag-of-word . This representation , however , can only ensure that there is at least one opinion in each relevant document , but it can not determine the relevance pairing of individual opinion to its target . 
In general , by simply representing a document in bag-of-word , contextual information i.e. the corresponding target of an opinion , is neglected . This may result in possible mismatch between an opinion and a target and in turn affects opinion retrieval performance . By the same token , the effect to documents consisting of mul-tiple topics , which is common in blogs and on-line reviews , is also significant . In this setting , even if a document is regarded opinionated , it can not ensure that all opinions in the document are indeed relevant to the target concerned . Therefore , we argue that existing information representation i.e. bag-of-word , can not satisfy the information needs for opinion retrieval . In this paper , we propose to handle opinion retrieval in the granularity of sentence . It is observed that a complete opinion is always expressed in one sentence , and the relevant target of the opinion is mostly the one found in it . Therefore , it is crucial to maintain the associative information between an opinion and its target within a sentence . We define the notion of a topic-sentiment word pair , which is composed of a topic term ( i.e. the target ) and a sentiment word ( i.e. opinion ) of a sentence . Word pairs can maintain intra-sentence contextual information to express the potential relevant opinions . In addition , inter-sentence contextual information is also captured by word pairs to represent the relationship among opinions on the same topic . In practice , the inter-sentence information reflects the degree of a word pair . Finally , we combine both intra-sentence and inter-sentence contextual information to construct a unified undirected graph to achieve effective opinion retrieval . The rest of the paper is organized as follows . In Section 2 , we describe the motivation of our approach . Section 3 presents a novel unified graph-based model for opinion retrieval . We evaluated our model and the results are presented in Section 4 . We review related works on opinion retrieval in Section 5 . Finally , in Section 6 , the paper is concluded and future work is suggested . In this work we focus on the problem of opinion retrieval . Different from existing approaches , which regard document relevance as the key indicator of opinion relevance , we propose to explore the relevance of individual opinion . To do that , opinion retrieval is performed in the granularity of sentence . We define the notion of word pair , which can not only maintain the association between the opinion and the corresponding target in the sentence , but it can also build up the relationship among sentences through the same word pair . Furthermore , we convert the relationships between word pairs and sentences into a unified graph , and use the HITS algorithm to achieve document ranking for opinion retrieval . Finally , we compare our approach with existing methods . Experimental results show that our proposed model performs well on COAE08 dataset . The novelty of our work lies in using word pairs to represent the information needs for opinion retrieval . On the one hand , word pairs can identify the relevant opinion according to intra-sentence contextual information . On the other hand , word pairs can measure the degree of a relevant opinion by taking inter-sentence contextual information into consideration . With the help of word pairs , the information needs for opinion retrieval can be represented appropriately . 
In the future , more research is required in the following directions : ( 1 ) Since word pairs can indicate relevant opinions effectively , it is worth further study on how they could be applied to other opinion oriented applications , e.g. opinion summarization , opinion prediction , etc . ( 2 ) The characteristics of blogs will be taken into consideration , i.e. , the post time , which could be helpful to create a more time sensitivity graph to filter out fake opinions . ( 3 ) Opinion holder is another important role of an opinion , and the identification of opinion holder is a main task in NTCIR . It would be interesting to study opinion holders , e.g. its seniority , for opinion retrieval . | Existing approaches to the opinion retrieval task represent documents using bag-of-words disregarding contextual information between an opinion and its corresponding text.
They propose a sentence-based approach that captures both intra-sentence and inter-sentence contextual information and combines them in a unified undirected graph.
The proposed method outperforms existing approaches on the COAE08 dataset, showing that word pairs can represent the information needs of opinion retrieval well. |
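A toy sketch of the word-pair/sentence graph and the HITS-based ranking the summary refers to; the sentences, word pairs, and document groupings are fabricated, and summing sentence scores into a document score is a simplifying assumption for the sketch:

```python
# Sketch of a word-pair / sentence graph ranked with HITS.
# Requires networkx (which uses scipy for HITS).
import networkx as nx

# Word-pair nodes ("topic_term|sentiment_word") linked to the sentences
# in which they occur.
edges = [
    ("camera|great", "s1"), ("camera|great", "s3"),
    ("battery|poor", "s2"), ("camera|blurry", "s2"),
    ("camera|great", "s4"), ("screen|bright", "s4"),
]
G = nx.Graph()
G.add_edges_from(edges)

hubs, authorities = nx.hits(G, max_iter=500)
sentence_scores = {n: authorities[n] for n in G if "|" not in n}

# Rank documents by the scores of the opinionated sentences they contain.
documents = {"docA": ["s1", "s2"], "docB": ["s3", "s4"]}
ranking = sorted(documents, key=lambda d: -sum(sentence_scores[s] for s in documents[d]))
print(ranking)
```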
D08-1050 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Most state-of-the-art wide-coverage parsers are trained on newspaper text and suffer a loss of accuracy in other domains , making parser adaptation a pressing issue . In this paper we demonstrate that a CCG parser can be adapted to two new domains , biomedical text and questions for a QA system , by using manually-annotated training data at the POS and lexical category levels only . This approach achieves parser accuracy comparable to that on newspaper data without the need for annotated parse trees in the new domain . We find that retraining at the lexical category level yields a larger performance increase for questions than for biomedical text and analyze the two datasets to investigate why different domains might behave differently for parser adaptation . Most state-of-the-art wide-coverage parsers are based on the Penn Treebank ( Marcus et al . , 1993 ) , making such parsers highly tuned to newspaper text . A pressing question facing the parsing community is how to adapt these parsers to other domains , such as biomedical research papers and web pages . A related question is how to improve the performance of these parsers on constructions that are rare in the Penn Treebank , such as questions . Questions are particularly important since a question parser is a component in most Question Answering ( QA ) systems ( Harabagiu et al . , 2001 ) . In this paper we investigate parser adaptation in the context of lexicalized grammars , by using a parser based on Combinatory Categorial Grammar ( CCG ) ( Steedman , 2000 ) . A key property of CCG is that it is lexicalized , meaning that each word in a sentence is associated with an elementary syntactic structure . In the case of CCG this is a lexical category expressing subcategorization information . We exploit this property of CCG by performing manual annotation in the new domain , but only up to this level of representation , where the annotation can be carried out relatively quickly . Since CCG lexical categories are so expressive , many of the syntactic characteristics of a domain are captured at this level . The two domains we consider are the biomedical domain and questions for a QA system . We use the term " domain " somewhat loosely here , since questions are best described as a particular set of syntactic constructions , rather than a set of documents about a particular topic . However , we consider question data to be interesting in the context of domain adaptation for the following reasons : 1 ) there are few examples in the Penn Treebank ( PTB ) and so PTB parsers typically perform poorly on them ; 2 ) questions form a fairly homogeneous set with respect to the syntactic constructions employed , and it is an interesting question how easy it is to adapt a parser to such data ; and 3 ) QA is becoming an important example of NLP technology , and question parsing is an important task for QA systems . The CCG parser we use ( Clark and Curran , 2007b ) makes use of three levels of representation : one , a POS tag level based on the fairly coarse-grained POS tags in the Penn Treebank ; two , a lexical category level based on the more fine-grained CCG lexical categories , which are assigned to words by a CCG su-pertagger ; and three , a hierarchical level consisting of CCG derivations . A key idea in this paper , following a pilot study in Clark et al . ( 2004 ) , is to perform manual annotation only at the first two levels . 
Since the lexical category level consists of sequences of tags , rather than hierarchical derivations , the annotation can be performed relatively quickly . For the biomedical and question domains we manually annotated approximately 1,000 and 2,000 sentences , respectively , with CCG lexical categories . We also created a gold standard set of grammatical relations ( GR ) in the Stanford format ( de Marneffe et al . , 2006 ) , using 500 of the questions . For the biomedical domain we used the BioInfer corpus ( Pyysalo et al . , 2007a ) , an existing gold-standard GR resource also in the Stanford format . We evaluated the parser on both lexical category assignment and recovery of GRs . The results show that the domain adaptation approach used here is successful in two very different domains , achieving parsing accuracy comparable to state-of-the-art accuracy for newspaper text . The results also show , however , that the two domains have different profiles with regard to the levels of representation used by the parser . We find that simply retraining the POS tagger used by the parser leads to a large improvement in performance for the biomedical domain , and that retraining the CCG supertagger on the annotated biomedical data improves the performance further . For the question data , retraining just the POS tagger also improves parser performance , but retraining the supertagger has a much greater effect . We perform some analysis of the two datasets in order to explain the different behaviours with regard to porting the CCG parser . We have targeted lower levels of representation in order to adapt a lexicalized-grammar parser to two new domains , biomedical text and questions . Although each of the lower levels has been targeted independently in previous work , this is the first study that examines both levels together to determine how they affect parsing accuracy . We achieved an accuracy on grammatical relations in the same range as that of the original parser for newspaper text , without requiring costly annotation of full parse trees . Both biomedical and question data are domains in which there is an immediate need for accurate parsing . The question dataset is in some ways an extreme example for domain adaptation , since the sentences are syntactically uniform ; on the other hand , it is of interest as a set of constructions where the parser initially performed poorly , and is a realistic parsing challenge in the context of QA systems . Interestingly , although an increase in accuracy at each stage of the pipeline did yield an increase at the following stage , these increases were not uniform across the two domains . The new POS tagger model was responsible for most of the improvement in parsing for the biomedical domain , while the new supertagger model was necessary to see a large improvement in the question domain . We attribute this to the fact that question syntax is significantly different from newspaper syntax . We expect these considerations to apply to any lexicalized-grammar parser . Of course , it would be useful to have a way of predicting which level of annotation would be most effective for adapting to a new domain before the annotation begins . 
The utility of measures such as unknown word rate ( which can be performed with unlabelled data ) and unknown POS n-gram rate ( which can be performed with only POS tags ) is not yet sufficiently clear to rely on them as predictive measures , but it seems a fruitful avenue for future work to investigate the importance of such measures for parser domain adaptation . | Most existing parsers are tuned for newspaper text, making them less accurate in other domains.
They propose a method to adapt a CCG parser to new domains using manually-annotated data only at POS and lexical category levels.
Without expensive full parse-tree annotation, the proposed method achieves accuracy comparable to newspaper-domain parsing on biomedical texts and on questions, which are rare in existing benchmark datasets.
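The conclusion above mentions unknown word rate as a candidate measure for predicting how hard a new domain will be; below is a tiny sketch of that measure, with fabricated source- and target-domain corpora:

```python
# Sketch of the "unknown word rate": the fraction of target-domain tokens not
# seen in the source-domain training vocabulary. Corpora are toy examples.
source_corpus = [
    "the company reported higher quarterly profits".split(),
    "shares rose after the announcement".split(),
]
target_corpus = [
    "the protein binds to the receptor".split(),
    "gene expression was measured in vitro".split(),
]

train_vocab = {w.lower() for sent in source_corpus for w in sent}
target_tokens = [w.lower() for sent in target_corpus for w in sent]
unknown = [w for w in target_tokens if w not in train_vocab]

print("unknown word rate: %.2f" % (len(unknown) / len(target_tokens)))
```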
D15-1028 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Research on modeling time series text corpora has typically focused on predicting what text will come next , but less well studied is predicting when the next text event will occur . In this paper we address the latter case , framed as modeling continuous inter-arrival times under a log-Gaussian Cox process , a form of inhomogeneous Poisson process which captures the varying rate at which the tweets arrive over time . In an application to rumour modeling of tweets surrounding the 2014 Ferguson riots , we show how interarrival times between tweets can be accurately predicted , and that incorporating textual features further improves predictions . Twitter is a popular micro-blogging service which provides real-time information on events happening across the world . Evolution of events over time can be monitored there with applications to disaster management , journalism etc . For example , Twitter has been used to detect the occurrence of earthquakes in Japan through user posts ( Sakaki et al . , 2010 ) . Modeling the temporal dynamics of tweets provides useful information about the evolution of events . Inter-arrival time prediction is a type of such modeling and has application in many settings featuring continuous time streaming text corpora , including journalism for event monitoring , real-time disaster monitoring and advertising on social media . For example , journalists track several rumours related to an event . Predicted arrival times of tweets can be applied for ranking rumours according to their activity and narrow the interest to investigate a rumour with a short interarrival time over that of a longer one . Modeling the inter-arrival time of tweets is a challenging task due to complex temporal patterns exhibited . Tweets associated with an event stream arrive at different rates at different points in time . For example , Figure 1a shows the arrival times ( denoted by black crosses ) of tweets associated with an example rumour around Ferguson riots in 2014 . Notice the existence of regions of both high and low density of arrival times over a one hour interval . We propose to address inter-arrival time prediction problem with log-Gaussian Cox process ( LGCP ) , an inhomogeneous Poisson process ( IPP ) which models tweets to be generated by an underlying intensity function which varies across time . Moreover , it assumes a non-parametric form for the intensity function allowing the model complexity to depend on the data set . We also provide an approach to consider textual content of tweets to model inter-arrival times . We evaluate the models using Twitter rumours from the 2014 Ferguson unrest , and demonstrate that they provide good predictions for inter-arrival times , beating the baselines e.g. homogeneous Poisson Process , Gaussian Process regression and univariate Hawkes Process . Even though the central application is rumours , one could apply the proposed approaches to model the arrival times of tweets corresponding to other types of memes , e.g. discussions about politics . This paper makes the following contributions : 1 . Introduces log-Gaussian Cox process to predict tweet arrival times . 2 . Demonstrates how incorporating text improves results of inter-arrival time prediction . This paper introduced the log-Gaussian Cox processes for the problem of predicting the interarrival times of tweets . We showed how text from posts helps to achieve significant improvements . 
Evaluation on a set of rumours from the Ferguson riots showed the efficacy of our methods compared to baselines . The proposed approaches are generalizable to problems other than rumours , e.g. disaster management and advertisement campaigns . | Modeling the inter-arrival time of tweets is challenging due to complex temporal patterns, but few works aim to predict when the next text event will occur.
They propose to apply a log-Gaussian Cox process model which captures the varying arrival rate of tweets over time, coupled with their textual content.
The proposed model outperforms baseline models on an inter-arrival time prediction task over rumours around the Ferguson riots and improves further when textual features are incorporated. |
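A sketch of how an inhomogeneous Poisson process such as the one underlying a log-Gaussian Cox process can be simulated by thinning; the log-intensity below is a fixed hand-chosen function standing in for the latent Gaussian-process draw, whereas the paper learns it from data and predicts inter-arrival times:

```python
# Sketch: simulate arrival times from an inhomogeneous Poisson process by
# thinning, with a hand-specified log-intensity in place of a GP draw.
import math
import random

random.seed(0)

def intensity(t):                     # lambda(t) = exp(f(t)), f chosen by hand
    return math.exp(1.0 + math.sin(2 * math.pi * t))

T = 3.0                               # observation window (e.g. hours)
lam_max = math.exp(2.0)               # upper bound on intensity over [0, T]

def sample_arrivals():
    arrivals, t = [], 0.0
    while True:
        t += random.expovariate(lam_max)      # candidate from homogeneous PP
        if t > T:
            return arrivals
        if random.random() < intensity(t) / lam_max:
            arrivals.append(t)                # accept with prob lambda(t)/lam_max

print(sample_arrivals()[:10])
```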
P10-1072 | You are an expert at summarizing long articles. Proceed to summarize the following text:
We present a game-theoretic model of bargaining over a metaphor in the context of political communication , find its equilibrium , and use it to rationalize observed linguistic behavior . We argue that game theory is well suited for modeling discourse as a dynamic resulting from a number of conflicting pressures , and suggest applications of interest to computational linguists . A 13 Dec 1992 article in The Times starts thus : The European train chugged out of the station last night ; for most of the day it looked as if it might be stalled there for some time . It managed to pull away at around 10:30 pm only after the Spanish prime minister , Felipe Gonzalez , forced the passengers in the first class carriages into a last minute whip round to sweeten the trip for the European Community 's poor four : Spain , Portugal , Greece and Ireland . The fat controller , Helmut Kohl , beamed with satisfaction as the deal was done . The elegantlysuited Francois Mitterrand was equally satisfied . But nobody was as pleased as John Major , stationmaster for the UK presidency , for whom the agreement marked a scarce high point in a battered premiership . The departure had actually been delayed by seven months by Danes on the line . Just when that problem was solved , there was the voluble outbreak , orchestrated by Spain , from the poor four passengers demanding that they should travel free and be given spending money , too . The coupling of the carriages may not be reliably secure but the pan-European express is in motion . That few seem to agree the destination suggests that future arguments are inevitable at every set of points . Next stop : Copenhagen . Apart from an entertaining read , the extended metaphor provides an elaborate conceptual correspondence between a familiar domain of train journeys and the unfolding process of European integration . Carriages are likened to nation states ; passengers to their peoples ; treaties to stations ; politicians to responsible rail company employees . In a compact form , the metaphor gives expression to both the small and the large scale of the process . It provides for the recent history : Denmark 's failure to ratify the 1992 Maastricht treaty until opt-outs were negotiated later that year is compared to dissenters sabotaging the journey by laying on the tracks ( Danes on the line ) ; negotiations over the Cohesion Fund that would provide less developed regions with financial aid to help them comply with convergence criteria are likened to second class carriages with poor passengers for whom the journey had to be subsidized . At a more general level , the European integration is a purposeful movement towards some destination according to a worked out plan , getting safely through negotiation and implementation from one treaty to another , as a train moving on its rails through subsequent stations , with each nation being separate yet tied with everyone else . Numerous inferences regarding speed , timetables , stations , passengers , different classes of tickets , temporary obstacles on the tracks , and so on can be made by the reader based on the knowledge of train journeys , giving him or her a feeling of an enhanced understanding1 of the highly complex process of European integration . So apt was the metaphor that political fights were waged over its details ( Musolff , 2000 ) . 
Worries about destination were given an eloquent expression by Margaret Thatcher ( Sunday Times , 20 Sept 1992 ): She warned EC leaders to stop their endless round of summits and take notice of their own people . " There is a fear that the European train will thunder forward , laden with its customary cargo of gravy , towards a destination neither wished for nor understood by electorates . But the train can be stopped , " she said . The metaphor proved flexible enough for further elaboration . John Major , a Conservative PM of Britain , spoke on June 1st , 1994 about his vision of the decision making at the EU level , saying that he had never believed that Europe must act as one on every issue , and advocating " a sensible new approach , varying when it needs to , multitrack , multi-speed , multi-layered . " He attempted to turn a largely negative Conservative take on the European train ( see Thatcher above ) into a tenable positive vision -each nation-carriage is now presumably a rather autonomous entity , waiting on a side track for the right locomotive , in a huge yet smoothly operating railroad system . Major 's political opponents offered their counter-frames . In both cases , the imagery of a large transportation system was taken up , yet turned around to suggest that " multi , for everyone " amounts to Britain being in " the slow lane , " and a different image was suggested that makes the negative evaluation of Britain 's opt-outs more poignant -a football metaphor , where relegation to the second division is a sign of a weak performance , and a school metaphor , where Britain is portrayed as an under-achiever : John Cunningham , Labour He has admitted that his Government would let Britain fall behind in Europe . He is apparently willing to offer voluntary relegation to the second division in Europe , and he is n't even prepared to put up a fight . I believe that in any two-speed Europe , Britain must be up with those in the fast lane . Clearly Mr Major does not . Paddy Ashdown , Liberal Democrat Are you really saying that the best that Britain can hope for under your leadership is ... the slow lane of a two-speed Europe ? Most people in this country will want to aim higher , and will reject your view of a ' drop-out ' Britain . The pro-European camp rallied around the " Britain in the slow lane " version as a critical stance towards the government 's European policy . Of the alternative metaphors , the school metaphor has some traction in the Euro discourse , where the European ( mainly German ) financial officers are compared to school authorities , and governments struggling to meet the strict convergence criteria to enter the Euro are compared to pupils that barely make the grade with Britain as a ' drop-out ' who gave up even trying ( Musolff , 2000 ) . The fact that European policy is being communicated and negotiated via a metaphor is not surprising ; after all , " there is always someone willing to help us think by providing us with a metaphor that accords with HIS views . " 2 From the point of view of the dynamics of political discourse , the puzzle is rather the apparent tendency of politicians to be compelled by the rival 's metaphorical framework . Thatcher tries to turn the train metaphor used by the pro-EU camp around . Yet , assuming metaphors are matters of choice , why should Thatcher feel constrained by her rival 's choice , why does n't she ignore it and merely suggest a new metaphor of her own design ? 
As the evidence above suggests , this is not Thatcher 's idiosyncrasy , as Major and his rivals acted similarly . Can this dynamic be explained ? In this article , we use the explanatory framework of game theory , seeking to rationalize the observed behavior by designing a game that would produce , at equilibrium , the observed dynamics . Specifically , we formalize the notion that the price of " locking " the public into a metaphorical frame of reference is that a politician is coerced into staying within the metaphor as well , even if he or she is at the receiving end of a rival 's rhetorical move . Since the use of game theory is not common in computational linguistics , we first explain its main attributes , justify our decision to make use of it , and draw connections to research questions that can benefit from its application ( section 2 ) . Next , we design the game of bargaining over a metaphor , and find its equilibrium ( section 3 ) , followed by a discussion ( section 4 ) . This article addressed a specific communicative setting ( rival politicians trying to " sell " to the public their versions of the unfolding realities and necessary policies ) and a specific linguistic tool ( an extended metaphor ) , showing that the particular use made of metaphor in such setting can be rationalized based on the characteristics of the setting . Various questions now arise . Given the central role played by the public gratification constraint in our model , would conversational situations without the need to persuade the public , such as meetings of small groups of peers or phone conversations between friends , tend less to the use of extended metaphor ? Conversely , does the use of extended metaphor in other settings testify to the existence of presumed onlookers who need to be " captured " in a particular version of reality -as in pedagogic or poetic context ? Considerations of the participants ' agendas and their impact on the ensuing dynamics of the exchange would we believe lead to further interest in game theoretic models when addressing complex social dynamics in situations like collaborative authoring , debates , or dating , and will augment the existing mostly statistical approaches with a broader picture of the relevant communication . | Metaphors used in political arguments provide elaborate conceptual correspondences with a tendency of politicians to be compelled by the rival's metaphorical framework to be explained.
They propose a game-theoretic model of bargaining over a metaphor that is suitable for modeling these dynamics and use it to rationalize the observed linguistic behavior.
They show that the proposed framework can rationalize political communications with the use of extended metaphors based on the characteristics of the setting. |
2021.acl-long.67 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Bilingual lexicons map words in one language to their translations in another , and are typically induced by learning linear projections to align monolingual word embedding spaces . In this paper , we show it is possible to produce much higher quality lexicons with methods that combine ( 1 ) unsupervised bitext mining and ( 2 ) unsupervised word alignment . Directly applying a pipeline that uses recent algorithms for both subproblems significantly improves induced lexicon quality and further gains are possible by learning to filter the resulting lexical entries , with both unsupervised and semisupervised schemes . Our final model outperforms the state of the art on the BUCC 2020 shared task by 14 F 1 points averaged over 12 language pairs , while also providing a more interpretable approach that allows for rich reasoning of word meaning in context . Further analysis of our output and the standard reference lexicons suggests they are of comparable quality , and new benchmarks may be needed to measure further progress on this task . 1 Bilingual lexicons map words in one language to their translations in another , and can be automatically induced by learning linear projections to align monolingual word embedding spaces ( Artetxe et al . , 2016 ; Smith et al . , 2017 ; Lample et al . , 2018 , inter alia ) . Although very successful in practice , the linear nature of these methods encodes unrealistic simplifying assumptions ( e.g. all translations of a word have similar embeddings ) . In this paper , we show it is possible to produce much higher quality lexicons without these restrictions by introducing new methods that combine ( 1 ) unsupervised bitext mining and ( 2 ) unsupervised word alignment . We show that simply pipelining recent algorithms for unsupervised bitext mining ( Tran et al . , 2020 ) and unsupervised word alignment ( Sabet et al . , 2020 ) significantly improves bilingual lexicon induction ( BLI ) quality , and that further gains are possible by learning to filter the resulting lexical entries . Improving on a recent method for doing BLI via unsupervised machine translation ( Artetxe et al . , 2019 ) , we show that unsupervised mining produces better bitext for lexicon induction than translation , especially for less frequent words . These core contributions are established by systematic experiments in the class of bitext construction and alignment methods ( Figure 1 ) . Our full induction algorithm filters the lexicon found via the initial unsupervised pipeline . The filtering can be either fully unsupervised or weakly-supervised : for the former , we filter using simple heuristics and global statistics ; for the latter , we train a multi-layer perceptron ( MLP ) to predict the probability of a word pair being in the lexicon , where the features are global statistics of word alignments . In addition to BLI , our method can also be directly adapted to improve word alignment and reach competitive or better alignment accuracy than the state of the art on all investigated language pairs . We find that improved alignment in sentence representations ( Tran et al . , 2020 ) leads to better contextual word alignments using local similarity ( Sabet et al . , 2020 ) . Our final BLI approach outperforms the previous state of the art on the BUCC 2020 shared task ( Rapp et al . , 2020 ) by 14 F 1 points averaged over 12 language pairs . 
Manual analysis shows that most of our false positives are due to the incompleteness of the reference and that our lexicon is comparable to the reference lexicon and the output of a supervised system . Because both of our key building blocks make use of the pretrained contextual representations from mBART ( Liu et al . , 2020 ) ... [ Figure : word alignment and statistical feature extraction for an example pair ( good , guten ) : cooccurrence = 2 ; one-to-one alignments = 2 ; many-to-one alignments = 0 ; cosine similarity = 0.8 ; inner product = 1.8 ; count(good) = 2 ; count(guten) = 2 ] We present a direct and effective framework for BLI with unsupervised bitext mining and word alignment , which sets a new state of the art on the task . From the perspective of pretrained multilingual models ( Conneau et al . , 2019 ; Liu et al . , 2020 ; Tran et al . , 2020 , inter alia ) , our work shows that they have successfully captured information about word translation that can be extracted using similarity based alignment and refinement . Although BLI is only about word types , it strongly benefits from contextualized reasoning at the token level . | Existing methods to induce bilingual lexicons use linear projections to align word embeddings, an approach based on unrealistic simplifying assumptions.
They propose to use both unsupervised bitext mining and unsupervised word alignment methods to produce higher quality lexicons.
The proposed method achieves state-of-the-art results on the bilingual lexicon induction task while keeping the pipeline interpretable.
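A minimal sketch (not the authors' released code) of the bitext-plus-alignment idea: count word-level alignment links over mined sentence pairs and rank translation candidates by a simple global statistic. The data format, the min_count threshold, and the relative-frequency score are illustrative assumptions; the paper additionally filters entries with heuristics or a small MLP.

    from collections import Counter, defaultdict

    def induce_lexicon(aligned_bitext, min_count=2):
        # aligned_bitext: iterable of (src_tokens, tgt_tokens, links), where links
        # is a list of (i, j) index pairs produced by an unsupervised word aligner
        pair_counts = Counter()
        src_counts = Counter()
        for src, tgt, links in aligned_bitext:
            for i, j in links:
                pair_counts[(src[i], tgt[j])] += 1
                src_counts[src[i]] += 1
        lexicon = defaultdict(list)
        for (s, t), c in pair_counts.items():
            if c >= min_count:
                # relative link frequency acts as a crude global statistic for filtering
                lexicon[s].append((t, c / src_counts[s]))
        return {s: sorted(cands, key=lambda x: -x[1]) for s, cands in lexicon.items()}

    # toy usage with two mined sentence pairs
    bitext = [(["good", "morning"], ["guten", "morgen"], [(0, 0), (1, 1)]),
              (["good", "night"], ["gute", "nacht"], [(0, 0), (1, 1)])]
    print(induce_lexicon(bitext, min_count=1))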
E17-1110 | You are an expert at summarizing long articles. Proceed to summarize the following text:
The growing demand for structured knowledge has led to great interest in relation extraction , especially in cases with limited supervision . However , existing distance supervision approaches only extract relations expressed in single sentences . In general , cross-sentence relation extraction is under-explored , even in the supervised-learning setting . In this paper , we propose the first approach for applying distant supervision to crosssentence relation extraction . At the core of our approach is a graph representation that can incorporate both standard dependencies and discourse relations , thus providing a unifying way to model relations within and across sentences . We extract features from multiple paths in this graph , increasing accuracy and robustness when confronted with linguistic variation and analysis error . Experiments on an important extraction task for precision medicine show that our approach can learn an accurate cross-sentence extractor , using only a small existing knowledge base and unlabeled text from biomedical research articles . Compared to the existing distant supervision paradigm , our approach extracted twice as many relations at similar precision , thus demonstrating the prevalence of cross-sentence relations and the promise of our approach . The accelerating pace in technological advance and scientific discovery has led to an explosive growth in knowledge . The ensuing information overload creates new urgency in assimilating frag-mented knowledge for integration and reasoning . A salient case in point is precision medicine ( Bahcall , 2015 ) . The cost of sequencing a person 's genome has fallen below $ 10001 , enabling individualized diagnosis and treatment of complex genetic diseases such as cancer . The availability of measurement for 20,000 human genes makes it imperative to integrate all knowledge about them , which grows rapidly and is scattered in millions of articles in PubMed2 . Traditional extraction approaches require annotated examples , which makes it difficult to scale to the explosion of extraction demands . Consequently , there has been increasing interest in indirect supervision ( Banko et al . , 2007 ; Poon and Domingos , 2009 ; Toutanova et al . , 2015 ) , with distant supervision ( Craven et al . , 1998 ; Mintz et al . , 2009 ) emerging as a particularly promising paradigm for augmenting existing knowledge bases from unlabeled text ( Poon et al . , 2015 ; Parikh et al . , 2015 ) . This progress is exciting , but distantsupervision approaches have so far been limited to single sentences , thus missing out on relations crossing the sentence boundary . Consider the following example:"The p56Lck inhibitor Dasatinib was shown to enhance apoptosis induction by dexamethasone in otherwise GC-resistant CLL cells . This finding concurs with the observation by Sade showing that Notch-mediated resistance of a mouse lymphoma cell line could be overcome by inhibiting p56Lck . " Together , the two sentences convey the fact that the drug Dasatinib could overcome resistance conferred by mutations to the Notch gene , which can not be inferred from either sentence alone . The impact of missed opportunities is especially pronounced in the long tail of knowledge . Such information is crucial for integrative reasoning as it includes the newest findings in specialized domains . In this paper , we present DISCREX , the first approach for distant supervision to relation extraction beyond the sentence boundary . 
The key idea is to adopt a document-level graph representation that augments conventional intra-sentential dependencies with new dependencies introduced for adjacent sentences and discourse relations . It provides a unifying way to derive features for classifying relations between entity pairs . As we augment this graph with new arcs , the number of possible paths between entities grow . We demonstrate that feature extraction along multiple paths leads to more robust extraction , allowing the learner to find structural patterns even when the language varies or the parser makes an error . The cross-sentence scenario presents a new challenge in candidate selection . This motivates our concept of minimal-span candidates in Section 3.2 . Excluding non-minimal candidates substantially improves classification accuracy . There is a long line of research on discourse phenomena , including coreference ( Haghighi and Klein , 2007 ; Poon and Domingos , 2008 ; Rahman and Ng , 2009 ; Raghunathan et al . , 2010 ) , narrative structures ( Chambers and Jurafsky , 2009 ; Cheung et al . , 2013 ) , and rhetorical relations ( Marcu , 2000 ) . For the most part , this work has not been connected to relation extraction . Our proposed extraction framework makes it easy to integrate such discourse relations . Our experiments evaluated the impact of coreference and discourse parsing , a preliminary step toward in-depth integration with discourse research . We conducted experiments on extracting druggene interactions from biomedical literature , an important task for precision medicine . By bootstrapping from a recently curated knowledge base ( KB ) with about 162 known interactions , our DIS-CREX system learned to extract inter-sentence drug-gene interactions at high precision . Crosssentence extraction doubled the yield compared to single-sentence extraction . Overall , by applying distant supervision , we extracted about 64,000 distinct interactions from about one million PubMed Central full-text articles , attaining two orders of magnitude increase compared to the original KB . We present the first approach for applying distant supervision to cross-sentence relation extraction , by adopting a document-level graph representation that incorporates both intra-sentential dependencies and inter-sentential relations such as adjacency and discourse relations . We conducted both automatic and manual evaluation on extracting drug-gene interactions from biomedical literature . With cross-sentence extraction , our DIS-CREX system doubled the yield of unique interactions , while maintaining the same accuracy . Using distant supervision , DISCREX improved the coverage of the Gene Drug Knowledge Database ( GDKD ) by two orders of magnitude , without requiring annotated examples . Future work includes : further exploration of features ; improved integration with coreference and discourse parsing ; combining distant supervision with active learning and crowd sourcing ; evaluate the impact of extractions to precision medicine ; applications to other domains . | Existing distance supervision methods for relation extraction cannot capture relations crossing the sentence boundary which is important in specialized domains with long-tail knowledge.
They propose a method for applying distant supervision to cross-sentence relation extraction by adopting a document-level graph representation that incorporates intra-sentential dependencies and inter-sentential relations.
Experiments on extracting drug-gene interactions from biomedical literature show that the proposed method doubles the yield of single-sentence extraction at similar precision.
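The document-level graph described above can be approximated with a generic graph library; the sketch below is an illustration, not the DISCREX implementation. It adds intra-sentential dependency edges plus a simple adjacent-sentence link, then enumerates multiple short paths between two entity mentions as feature sources. The input format and the choice of the first token as the inter-sentence anchor are assumptions.

    import networkx as nx

    def build_document_graph(sentences):
        # sentences: list of dicts with 'tokens' (list of words) and 'deps'
        # (list of (head_idx, dep_idx) pairs from a dependency parser)
        g = nx.Graph()
        for s_id, sent in enumerate(sentences):
            g.add_nodes_from((s_id, i) for i in range(len(sent["tokens"])))
            for h, d in sent["deps"]:                 # intra-sentential dependency edges
                g.add_edge((s_id, h), (s_id, d), kind="dep")
            if s_id > 0:                              # inter-sentential adjacency edge
                g.add_edge((s_id - 1, 0), (s_id, 0), kind="adjacent")
        return g

    def entity_paths(graph, mention_a, mention_b, cutoff=8):
        # several short paths between the two mentions -> more robust features
        return list(nx.all_simple_paths(graph, mention_a, mention_b, cutoff=cutoff))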
D09-1066 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Distance-based ( windowless ) word assocation measures have only very recently appeared in the NLP literature and their performance compared to existing windowed or frequency-based measures is largely unknown . We conduct a largescale empirical comparison of a variety of distance-based and frequency-based measures for the reproduction of syntagmatic human assocation norms . Overall , our results show an improvement in the predictive power of windowless over windowed measures . This provides support to some of the previously published theoretical advantages and makes windowless approaches a promising avenue to explore further . This study also serves as a first comparison of windowed methods across numerous human association datasets . During this comparison we also introduce some novel variations of window-based measures which perform as well as or better in the human association norm task than established measures . Automatic discovery of semantically associated words has attracted a large amount of attention in the last decades and a host of computational association measures have been proposed to deal with this task ( see Section 2 ) . These measures traditionally rely on the co-ocurrence frequency of two words in a corpus to estimate a relatedness score . There has been a recent emergence of distancebased language modelling techiques in NLP ( Savicki and Hlavacova , 2002 ; Terra and Clarke , 2004 ) in which the number of tokens separating words is the essential quantity . While some of this work has considered distance-based alternatives to conventional association measures ( Hardcastle , 2005 ; Washtell , 2009 ) , there has been no principled empirical evaluation of these measures as predictors of human association . We remedy this by conducting a thorough comparison of a wide variety of frequency-based and distance-based measures as predictors of human association scores as elicited in several different free word association tasks . In this work we focus on first-order association measures as predictors of syntagmatic associations . This is in contrast to second and higher-order measures which are better predictors of paradigmatic associations , or word similarity . The distinction between syntagmatic and paradigmatic relationship types is neither exact nor mutually exclusive , and many paradigmatic relationships can be observed syntagmatically in the text . Roughly in keeping with ( Rapp , 2002 ) , we hereby regard paradigmatic assocations as those based largely on word similarity ( i.e. including those typically classed as synonyms , antonyms , hypernyms , hyponyms etc ) , whereas syntagmatic associations are all those words which strongly invoke one another yet which can not readily be said to be similar . Typically these will have an identifiable semantic or grammatical relationship ( meronym / holonym : stem -flower , verb / object : eat -food etc ) , or may have harder-to-classify topical or idiomatic relationships ( family -Christmas , rock -roll ) . We will show in Section 3.2 that syntagmatic relations by themselves constitute a substantial 25 - 40 % of the strongest human responses to cue words . Although the automatic detection of these assocations in text has received less attention than that of paradigmatic associations , they are nonetheless important in applications such as the resolution of bridging anaphora ( Vieira and Poesio , 2000 ) . 
1 Furthermore , first-order associations are often the basis of higher-order vector wordspace models used for predicting paradigmatic relationships : i.e. through the observation of words which share similar sets of syntagmatic associations . Therefore improvements made at the level we are concerned with may reasonably be expected to carry through to applications which hinge on the identification of paradigmatic relationships . After a discussion of previous work in Section 2 , we formulate the exact association measures and parameter settings which we compare in Section 3 , where we also introduce the corpora and human association sets used . Then , by using evaluations similar to those described in ( Baroni et al . , 2008 ) and by Rapp ( 2002 ) , we show that the best distance-based measures correlate better overall with human association scores than do the best window based configurations ( see Section 4 ) , and that they also serve as better predictors of the strongest human associations ( see Section 5 ) . This paper presented the first empirical comparison of window-based and the relatively recently introduced windowless association measures , using their ability to reproduce human association scores as a testbed . We show that the best windowless measures are always at least as good as the best window-based measures , both when it comes to overall correlation with human association scores and predicting the strongest human response . In addition , for several human association sets , they perform significantly better . Although not all parameter settings and corpus sizes could be explored , we conclude that it is worthwhile investigating windowless association measures further . As a side-benefit , we have also introduced new variants of existing frequency-based association measures and shown them to perform as well as or better than their existing counterparts . Although these measures were semi-principled in their construction , a deeper understanding of why they work so well is needed . This may in turn lead to the construction of superior windowless measures . In our own future work , we are especially interested in using higher-order windowless association measures for retrieving paradigmatic relations as well as exploring their use in various NLP applications . | The performance of new windowless word association measures which take the number of tokens separating words into account remains unknown.
They conduct large-scale empirical comparisons of window-based and windowless association measures for the reproduction of syntagmatic human association norms.
The best windowless measures perform at least on par with the best window-based measures in correlation with human association scores.
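To make the windowed/windowless distinction concrete, here is a toy contrast between a frequency-based co-occurrence count and a distance-based score that rewards proximity between occurrences. Both formulas are illustrative stand-ins, not the specific measures compared in the paper.

    def window_cooccurrence(tokens, w1, w2, window=5):
        # frequency-based: count occurrences of w2 within a fixed window around w1
        count = 0
        for i, t in enumerate(tokens):
            if t == w1:
                context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                count += context.count(w2)
        return count

    def windowless_score(tokens, w1, w2):
        # distance-based: reward each occurrence of w1 by closeness of the nearest w2
        positions = [i for i, t in enumerate(tokens) if t == w2]
        if not positions:
            return 0.0
        return sum(1.0 / (1 + min(abs(i - j) for j in positions))
                   for i, t in enumerate(tokens) if t == w1)

    text = "the stem of the flower and the flower on the stem".split()
    print(window_cooccurrence(text, "stem", "flower"), windowless_score(text, "stem", "flower"))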
2020.emnlp-main.500 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Adversarial attacks for discrete data ( such as texts ) have been proved significantly more challenging than continuous data ( such as images ) since it is difficult to generate adversarial samples with gradient-based methods . Current successful attack methods for texts usually adopt heuristic replacement strategies on the character or word level , which remains challenging to find the optimal solution in the massive space of possible combinations of replacements while preserving semantic consistency and language fluency . In this paper , we propose BERT-Attack , a high-quality and effective method to generate adversarial samples using pre-trained masked language models exemplified by BERT . We turn BERT against its fine-tuned models and other deep neural models in downstream tasks so that we can successfully mislead the target models to predict incorrectly . Our method outperforms state-of-theart attack strategies in both success rate and perturb percentage , while the generated adversarial samples are fluent and semantically preserved . Also , the cost of calculation is low , thus possible for large-scale generations . The code is available at https://github.com/ LinyangLee / BERT-Attack . Despite the success of deep learning , recent works have found that these neural networks are vulnerable to adversarial samples , which are crafted with small perturbations to the original inputs ( Goodfellow et al . , 2014 ; Kurakin et al . , 2016 ; Chakraborty et al . , 2018 ) . That is , these adversarial samples are imperceptible to human judges while they can mislead the neural networks to incorrect predictions . Therefore , it is essential to explore these adversarial attack methods since the ultimate goal is to make sure the neural networks are highly reliable and robust . While in computer vision fields , both attack strategies and their defense countermeasures are well-explored ( Chakraborty et al . , 2018 ) , the adversarial attack for text is still challenging due to the discrete nature of languages . Generating of adversarial samples for texts needs to possess such qualities : ( 1 ) imperceptible to human judges yet misleading to neural models ; ( 2 ) fluent in grammar and semantically consistent with original inputs . Previous methods craft adversarial samples mainly based on specific rules ( Li et al . , 2018 ; Gao et al . , 2018 ; Yang et al . , 2018 ; Alzantot et al . , 2018 ; Ren et al . , 2019 ; Jin et al . , 2019 ; Zang et al . , 2020 ) . Therefore , these methods are difficult to guarantee the fluency and semantically preservation in the generated adversarial samples at the same time . Plus , these manual craft methods are rather complicated . They use multiple linguistic constraints like NER tagging or POS tagging . Introducing contextualized language models to serve as an automatic perturbation generator could make these rules designing much easier . The recent rise of pre-trained language models , such as BERT ( Devlin et al . , 2018 ) , push the performances of NLP tasks to a new level . On the one hand , the powerful ability of a fine-tuned BERT on downstream tasks makes it more challenging to be adversarial attacked ( Jin et al . , 2019 ) . On the other hand , BERT is a pre-trained masked language model on extremely large-scale unsupervised data and has learned general-purpose language knowledge . Therefore , BERT has the potential to generate more fluent and semantic-consistent substitutions for an input text . 
Naturally , both the properties of BERT motivate us to explore the possibility of attacking a fine-tuned BERT with another BERT as the attacker . In this paper , we propose an effective and high-quality adversarial sample generation method : BERT-Attack , using BERT as a language model to generate adversarial samples . The core algorithm of BERT-Attack is straightforward and consists of two stages : finding the vulnerable words in one given input sequence for the target model ; then applying BERT in a semantic-preserving way to generate substitutes for the vulnerable words . With the ability of BERT , the perturbations are generated considering the context around . Therefore , the perturbations are fluent and reasonable . We use the masked language model as a perturbation generator and find perturbations that maximize the risk of making wrong predictions ( Goodfellow et al . , 2014 ) . Differently from previous attacking strategies that require traditional single-direction language models as a constraint , we only need to inference the language model once as a perturbation generator rather than repeatedly using language models to score the generated adversarial samples in a trial and error process . Experimental results show that the proposed BERT-Attack method successfully fooled its finetuned downstream model with the highest attack success rate compared with previous methods . Meanwhile , the perturb percentage and the query number are considerably lower , while the semantic preservation is high . To summarize our main contributions : • We propose a simple and effective method , named BERT-Attack , to effectively generate fluent and semantically-preserved adversarial samples that can successfully mislead stateof-the-art models in NLP , such as fine-tuned BERT for various downstream tasks . • BERT-Attack has a higher attacking success rate and a lower perturb percentage with fewer access numbers to the target model compared with previous attacking algorithms , while does not require extra scoring models therefore extremely effective . In this work , we propose a high-quality and effective method BERT-Attack to generate adversarial samples using BERT masked language model . Experiment results show that the proposed method achieves a high success rate while maintaining a minimum perturbation . Nevertheless , candidates generated from the masked language model can sometimes be antonyms or irrelevant to the original words , causing a semantic loss . Thus , enhancing language models to generate more semantically related perturbations can be one possible solution to perfect BERT-Attack in the future . | Generating adversarial samples with gradient-based methods for text data is because of its discrete nature and existing complicated heuristic-based methods suffer from finding optimal solutions.
They propose to use BERT to generate adversarial samples by first finding the vulnerable words and then generating substitutes for these words in a semantic-preserving way.
The proposed method outperforms state-of-the-art methods in success rate and perturbation percentage while preserving the fluency and semantics of the generated samples at low cost.
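The substitution step can be approximated with an off-the-shelf masked-language-model pipeline, assuming the Hugging Face transformers library; this is a simplified sketch rather than the released BERT-Attack code, and the preceding step of scoring vulnerable words by the drop in the target model's confidence is omitted.

    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    def candidate_substitutes(tokens, idx, top_k=8):
        # mask the idx-th (vulnerable) word and let the masked LM propose
        # context-aware, fluent replacements in a single forward pass
        masked = list(tokens)
        masked[idx] = fill_mask.tokenizer.mask_token
        predictions = fill_mask(" ".join(masked), top_k=top_k)
        return [p["token_str"] for p in predictions
                if p["token_str"].lower() != tokens[idx].lower()]

    print(candidate_substitutes("the movie was absolutely wonderful".split(), idx=4))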
N15-1159 | You are an expert at summarizing long articles. Proceed to summarize the following text:
This paper describes a simple and principled approach to automatically construct sentiment lexicons using distant supervision . We induce the sentiment association scores for the lexicon items from a model trained on a weakly supervised corpora . Our empirical findings show that features extracted from such a machine-learned lexicon outperform models using manual or other automatically constructed sentiment lexicons . Finally , our system achieves the state-of-the-art in Twitter Sentiment Analysis tasks from Semeval-2013 and ranks 2nd best in Semeval-2014 according to the average rank . One of the early and rather successful models for sentiment analysis ( Pang and Lee , 2004 ; Pang and Lee , 2008 ) relied on manually constructed lexicons that map words to their sentiment , e.g. , positive , negative or neutral . The document-level polarity is then assigned by performing some form of averaging , e.g. , majority voting , of individual word polarities found in the document . These systems show an acceptable level of accuracy , they are easy to build and are highly computationally efficient as the only operation required to assign a polarity label are the word lookups and averaging . However , the information about word polarities in a document are best exploited when using machine learning models to train a sentiment classifier . In fact , most successful sentiment classification systems rely on supervised learning . Interestingly , a simple bag of words model using just unigrams and bigrams with an SVM has shown excellent results ( Wang and Manning , 2012 ) performing on par or beating more complicated models , e.g. , using neural networks ( Socher et al . , 2011 ) . Regarding Twitter sentiment analysis , the top performing system ( Mohammad et al . , 2013 ) from Semeval-2013 Twittter Sentiment Analysis task ( Nakov et al . , 2013 ) follows this recipe by training an SVM on various surface form , sentiment and semantic features . Perhaps , the most valuable finding is that sentiment lexicons appear to be the most useful source of features accounting for over 8 point gains in the F-measure on top of the standard feature sets . Sentiment lexicons are mappings from words to scores capturing the degree of the sentiment expressed by a given word . While several manually constructed lexicons are made available , e.g. , the MPQA ( Wilson et al . , 2005 ) , the Bing and Liu ( Hu and Liu , 2004 ) and NRC Emoticon ( Mohammad and Turney , 2013 ) lexicons , providing high quality word-sentiment associations compiled by humans , still their main drawback is low recall . For example , the largest NRC Emoticon lexicon contains only 14k items , whereas tweets with extremely sparse surface forms are known to form very large vocabularies . Hence , using larger lexicons with better recall has the potential of learning more accurate models . Extracting such lexicons automatically is a challenging and interesting problem ( Lau et al . , 2011 ; Bro and Ehrig , 2013 ; Liu et al . , 2013 ; Tai and Kao , 2013 ; Yang et al . , 2014 ; Huang et al . , 2014 ) . However , different from previous work our goal is not to extract human-interpretable lexicons but to use them as a source of features to improve the classifier accuracy . Following this idea , the authors in ( Mohammad et al . , 2013 ) use features derived from the lexicons to build a state-of-the-art sentiment classifier for Twitter . They construct automatic lexicons using noisy labels automatically inferred from emoticons and hashtags present in the tweets . 
The wordsentiment association scores are estimated using pointwise mutual information ( PMI ) computed between a word and a tweet label . While the idea to model statistical correlations between the words and tweet labels using PMI or any other metric is rather intuitive , we believe there is a more effective way to exploit noisy labels for estimating the word-sentiment association scores . Our method relies on the idea of distant supervision ( Marchetti-Bowick and Chambers , 2012 ) . We use a large distantly supervised Twitter corpus , which contains noisy opinion labels ( positive or negative ) to learn a supervised polarity classifier . We encode tweets using words and multi-word expressions as features ( which are also entries in our lexicon ) . The weights from the learned model are then used to define which lexicon items to keep , i.e. , items that constitute a good sentiment lexicon . The scores for the lexicon items can be then directly used to encode new tweets or used to derive more advanced features . Using machine learning to induce the scores for the lexicon items has an advantage of learning the scores that are directly optimized for the classification task , where lexicon items with higher discriminative power tend to receive higher weights . To assess the effectiveness of our approach , we reimplemented the state-of-the-art system ranking 1st in Semeval-2013 Twitter Sentiment Analysis challenge and used it as our baseline . We show that adding features from our machine-learned sentiment lexicon yields better results than any of the automatic PMI lexicons used in the baseline and all of them combined together . Our system obtains new state-of-the-art results on the SemEval-2013 message level task with an F-score of 71.32 -a 2 % of absolute improvement over the previous best system in SemEval-2013 . We also evaluate the utility of the ML lexicon on the five test sets from a recent Semeval-2014 task showing significant improvement over a strong baseline . Finally , our system shows high accuracy among the 42 systems participating in the Semeval-2014 challenge ranking 2nd best according to the average rank across all test sets . We demonstrated a simple and principled approach grounded in machine learning to construct sentiment lexicons . We show that using off-the-shelf machine learning tools to automatically extract lexicons greatly outperforms other automatically constructed lexicons that use pointwise mutual information to estimate sentiment scores for the lexicon items . We have shown that combining our machinelearned lexicon with the previous best system yields state-of-the-art results in Semeval-2013 gaining over 2 points in F-score and ranking our system 2nd according to the average rank over the five test sets of Semeval-2014 . Finally , our ML-based lexicon shows excellent results when added on top of the current state-of-the-art NRC system . While our experimental study is focused on Twitter , our method is general enough to be applied to sentiment classification tasks on other domains . In the future , we plan to experiment with constructing ML lexicons from larger Twitter corpora also using hashtags . Recently , deep convolutional neural networks for sentence modelling ( Kalchbrenner et al . , 2014 ; Kim , 2014 ) have shown promising results on several NLP tasks . In particular , ( Tang et al . , 2014 ) showed that learning sentiment-specific word embeddings and using them as features can boost the accuracy of existing sentiment classifiers . 
In future work we plan to explore such approaches . | While sentiment lexicons are useful for building accurate sentiment classification systems, existing lexicons either suffer from low recall or are built for human interpretability rather than classification accuracy.
They propose to use Twitter's noisy opinion labels as distant supervision to learn a supervised polarity classifier and use it to obtain sentiment lexicons.
Using the obtained lexicon on top of an existing system achieves state-of-the-art results on the SemEval-2013 message-level task and outperforms baseline models on several other test sets.
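A toy version of the machine-learned lexicon idea: train a linear classifier on distantly labeled tweets (labels inferred from emoticons) and read the word-sentiment association scores off the learned weights. The tiny corpus, the binary bag-of-ngrams features, and logistic regression in place of the paper's exact learner are all assumptions for illustration.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # distant supervision: 1 = tweet contained a positive emoticon, 0 = negative
    tweets = ["great game tonight :)", "so happy with the result :)",
              "worst service ever :(", "feeling sick and tired :("]
    labels = [1, 1, 0, 0]

    vectorizer = CountVectorizer(ngram_range=(1, 2), binary=True)
    X = vectorizer.fit_transform(tweets)
    classifier = LogisticRegression().fit(X, labels)

    # the learned weights serve as sentiment-association scores for the lexicon
    lexicon = dict(zip(vectorizer.get_feature_names_out(), classifier.coef_[0]))
    print(sorted(lexicon.items(), key=lambda kv: kv[1])[:5])   # most negative entries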
N07-1011 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases . In this paper , we propose a machine learning method that enables features over sets of noun phrases , resulting in a first-order probabilistic model for coreference . We outline a set of approximations that make this approach practical , and apply our method to the ACE coreference dataset , achieving a 45 % error reduction over a comparable method that only considers features of pairs of noun phrases . This result demonstrates an example of how a firstorder logic representation can be incorporated into a probabilistic model and scaled efficiently . Noun phrase coreference resolution is the problem of clustering noun phrases into anaphoric sets . A standard machine learning approach is to perform a set of independent binary classifications of the form " Is mention a coreferent with mention b ? " This approach of decomposing the problem into pairwise decisions presents at least two related difficulties . First , it is not clear how best to convert the set of pairwise classifications into a disjoint clustering of noun phrases . The problem stems from the transitivity constraints of coreference : If a and b are coreferent , and b and c are coreferent , then a and c must be coreferent . This problem has recently been addressed by a number of researchers . A simple approach is to perform the transitive closure of the pairwise decisions . However , as shown in recent work ( McCallum and Wellner , 2003 ; Singla and Domingos , 2005 ) , better performance can be obtained by performing relational inference to directly consider the dependence among a set of predictions . For example , McCallum and Wellner ( 2005 ) apply a graph partitioning algorithm on a weighted , undirected graph in which vertices are noun phrases and edges are weighted by the pairwise score between noun phrases . A second and less studied difficulty is that the pairwise decomposition restricts the feature set to evidence about pairs of noun phrases only . This restriction can be detrimental if there exist features of sets of noun phrases that can not be captured by a combination of pairwise features . As a simple example , consider prohibiting coreferent sets that consist only of pronouns . That is , we would like to require that there be at least one antecedent for a set of pronouns . The pairwise decomposition does not make it possible to capture this constraint . In general , we would like to construct arbitrary features over a cluster of noun phrases using the full expressivity of first-order logic . Enabling this sort of flexible representation within a statistical model has been the subject of a long line of research on first-order probabilistic models ( Gaifman , 1964 ; Halpern , 1990 ; Paskin , 2002 ; Poole , 2003 ; Richardson and Domingos , 2006 ) . Conceptually , a first-order probabilistic model can be described quite compactly . A configuration of the world is represented by a set of predi- cates , each of which has an associated real-valued parameter . The likelihood of each configuration of the world is proportional to a combination of these weighted predicates . In practice , however , enumerating all possible configurations , or even all the predicates of one configuration , can result in intractable combinatorial growth ( de Salvo Braz et al . , 2005 ; Culotta and McCallum , 2006 ) . In this paper , we present a practical method to perform training and inference in first-order models of coreference . 
We empirically validate our approach on the ACE coreference dataset , showing that the first-order features can lead to an 45 % error reduction . We have presented learning and inference procedures for coreference models using first-order features . By relying on sampling methods at training time and approximate inference methods at testing time , this approach can be made scalable . This results in a coreference model that can capture features over sets of noun phrases , rather than simply pairs of noun phrases . This is an example of a model with extremely flexible representational power , but for which exact inference is intractable . The simple approximations we have described here have enabled this more flexible model to outperform a model that is simplified for tractability . A short-term extension would be to consider features over entire clusterings , such as the number of clusters . This could be incorporated in a ranking scheme , as in Ng ( 2005 ) . Future work will extend our approach to a wider variety of tasks . The model we have described here is specific to clustering tasks ; however a similar formulation could be used to approach a number of language processing tasks , such as parsing and relation extraction . These tasks could benefit from first-order features , and the present work can guide the approximations required in those domains . Additionally , we are investigating more sophisticated inference algorithms that will reduce the greediness of the search procedures described here . | Existing approaches treat noun phrase coreference resolution as a set of independent binary classifications limiting the features to be only pairs of noun phrases.
They propose a machine learning method that enables features over sets of noun phrases, coupled with sampling at training time and approximate inference to remain scalable.
Evaluated on the ACE coreference dataset, the proposed method achieves a 45% error reduction over a comparable pairwise method.
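As a concrete illustration of a first-order feature that a pairwise decomposition cannot express, the toy function below fires on a whole proposed cluster only when every mention is a pronoun; the pronoun list and string-based mention representation are placeholder assumptions.

    PRONOUNS = {"he", "she", "it", "they", "him", "her", "them", "his", "its", "their"}

    def all_pronouns(cluster):
        # first-order feature over a set of mentions: a cluster made up entirely
        # of pronouns has no antecedent and should be penalized by the model
        return all(mention.lower() in PRONOUNS for mention in cluster)

    print(all_pronouns(["he", "him", "his"]))       # True  -> feature fires
    print(all_pronouns(["Obama", "he", "him"]))     # False -> cluster has an antecedent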
2021.emnlp-main.765 | You are an expert at summarizing long articles. Proceed to summarize the following text:
The clustering-based unsupervised relation discovery method has gradually become one of the important methods of open relation extraction ( OpenRE ) . However , high-dimensional vectors can encode complex linguistic information which leads to the problem that the derived clusters can not explicitly align with the relational semantic classes . In this work , we propose a relationoriented clustering model and use it to identify the novel relations in the unlabeled data . Specifically , to enable the model to learn to cluster relational data , our method leverages the readily available labeled data of pre-defined relations to learn a relationoriented representation . We minimize distance between the instance with same relation by gathering the instances towards their corresponding relation centroids to form a cluster structure , so that the learned representation is cluster-friendly . To reduce the clustering bias on predefined classes , we optimize the model by minimizing a joint objective on both labeled and unlabeled data . Experimental results show that our method reduces the error rate by 29.2 % and 15.7 % , on two datasets respectively , compared with current SOTA methods . Relation extraction ( RE ) , a crucial basic task in the field of information extraction , is of the utmost practical interest to various fields including web search ( Xiong et al . , 2017 ) , knowledge base completion ( Bordes et al . , 2013 ) , and question answering ( Yu et al . , 2017 ) . However , conventional RE paradigms such as supervision and distant supervision are generally designed for pre-defined relations , which can not deal with new emerging relations in the real world . Under this background , open relation extraction ( OpenRE ) has been widely studied for its use Figure 1 : Although both instances S 2 and S 3 express founded relation while S 1 expresses CEO relation , the distance between S 1 and S 2 is still smaller than that between S 2 and S 3 . This is because there may be more similar surface information ( e.g. word overlapping ) or syntactic structure between S 1 and S 2 , thus the derived clusters can not explicitly align with relations . in extracting new emerging relational types from open-domain corpora . The approaches used to handle open relations roughly fall into one of two groups . The first group is open information extraction ( OpenIE ) ( Etzioni et al . , 2008 ; Yates et al . , 2007 ; Fader et al . , 2011 ) , which directly extracts related phrases as representations of different relational types . However , if not properly canonicalized , the extracted relational facts can be redundant and ambiguous . The second group is unsupervised relation discovery ( Yao et al . , 2011 ; Shinyama and Sekine , 2006 ; Simon et al . , 2019 ) . In this type of research , much attention has been focused on unsupervised clustering-based RE methods , which cluster and recognize relations from high-dimensional representations ( Elsahar et al . , 2017 ) . Recently , the self-supervised signals in pretrained language model are further exploited for clustering optimization ( Hu et al . , 2020 ) . However , many studies show that highdimensional embeddings can encode complex linguistic information such as morphological ( Peters et al . , 2018 ) , local syntactic ( Hewitt and Manning , 2019 ) , and longer range semantic information ( Jawahar et al . , 2019 ) . Consequently , the distance of representation is not completely consistent with relational semantic similarity . Although Hu et al . 
( 2020 ) use self-supervised signals to optimize clustering , there is still no guarantee that the learned clusters will explicitly align with the desired relational semantic classes ( Xing et al . , 2002 ) . As shown in Figure 1 , we use the method proposed by Hu et al . ( 2020 ) to get the instance representations . Although both instances S 2 and S 3 express the founded relation , the euclidean distance between them is larger than that between S 1 and S 2 , which express different relation . Obviously , the clustering algorithm tends to group instances S 1 and S 2 together , rather than S 2 and S 3 which express the same relation . In this work , we propose a relation-oriented clustering method . To enable the model to learn to cluster relational data , pre-defined relations and their existing labeled instances are leveraged to optimize a non-linear mapping , which transforms high-dimensional entity pair representations into relation-oriented representations . Specifically , we minimize distance between the instances with same relation by gathering the instances representation towards their corresponding relation centroids to form the cluster structure , so that the learned representation is cluster-friendly . In order to reduce the clustering bias on the predefined classes , we iteratively train the entity pair representations by optimizing a joint objective function on the labeled and unlabeled subsets of the data , improving both the supervised classification of the labeled data , and the clustering of the unlabeled data . In addition , the proposed method can be easily extended to incremental learning by classifying the pre-defined and novel relations with a unified classifier , which is often desirable in real-world applications . Our experimental results show that our method outperforms current state-of-the-art methods for OpenRE . Our codes are publicly available at Github * . To summarize , the main contributions of our work are as follows : ( 1 ) we propose a novel relation-oriented clustering method RoCORE to enable model to learn to cluster relational data ; ( 2 ) the proposed method achieves the incremental learning of unlabeled novel relations , which is often desirable in real-world applications ; ( 3 ) experimental results show that our method reduces * https://github.com / Ac-Zyx / RoCORE . the error rate by 29.2 % and 15.7 % , on two realworld datasets respectively , compared with current state-of-the-art OpenRE methods . In this work , we introduce a relation-oriented clustering method that extends the current unsupervised clustering-based OpenRE method . The proposed method leverages the labeled data of pre-defined relations to learn a relation-oriented representation from which the derived clusters explicitly align with relational classes . Iterative joint training method effectively reduces the unwanted bias on labeled data . In addition , the proposed method can be easily extended to incremental learning of novel relations . Experimental results show that our method outperforms SOTA methods for OpenRE . | Even though high-dimensional vectors that can encode complex information used for relation extraction are not guaranteed to be consistent with relational semantic similarity.
They propose to use available relation labeled data to obtain relation-oriented representation by minimizing the distance between the same relation instances.
The proposed approach significantly reduces error rates compared with the best existing models for open relation extraction.
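A minimal PyTorch sketch of the centroid-gathering objective on the labeled data: each instance representation is pulled toward the centroid of its pre-defined relation so the space becomes cluster-friendly. This is an illustrative loss, not the released RoCORE code, and it assumes every relation id appears at least once in the batch.

    import torch

    def centroid_gathering_loss(reps, labels, num_relations):
        # reps: (batch, dim) relation-oriented representations of labeled instances
        # labels: (batch,) pre-defined relation ids
        centroids = torch.stack([reps[labels == r].mean(dim=0)
                                 for r in range(num_relations)])
        # squared distance of each instance to its own relation centroid
        return ((reps - centroids[labels]) ** 2).sum(dim=1).mean()

    reps = torch.randn(8, 16, requires_grad=True)      # stand-in for encoder output
    labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
    loss = centroid_gathering_loss(reps, labels, num_relations=4)
    loss.backward()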
P12-1096 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Long distance word reordering is a major challenge in statistical machine translation research . Previous work has shown using source syntactic trees is an effective way to tackle this problem between two languages with substantial word order difference . In this work , we further extend this line of exploration and propose a novel but simple approach , which utilizes a ranking model based on word order precedence in the target language to reposition nodes in the syntactic parse tree of a source sentence . The ranking model is automatically derived from word aligned parallel data with a syntactic parser for source language based on both lexical and syntactical features . We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks , and show that it can significantly outperform the baseline phrase-based SMT system . Modeling word reordering between source and target sentences has been a research focus since the emergence of statistical machine translation . In phrase-based models ( Och , 2002 ; Koehn et al . , 2003 ) , phrase is introduced to serve as the fundamental translation element and deal with local reordering , while a distance based distortion model is used to coarsely depict the exponentially decayed word movement probabilities in language translation . Further work in this direction employed lexicalized distortion models , including both generative ( Koehn et al . , 2005 ) and discriminative ( Zens and Ney , 2006 ; Xiong et al . , 2006 ) variants , to achieve finer-grained estimations , while other work took into account the hierarchical language structures in translation ( Chiang , 2005 ; Galley and Manning , 2008 ) . Long-distance word reordering between language pairs with substantial word order difference , such as Japanese with Subject-Object-Verb ( SOV ) structure and English with Subject-Verb-Object ( SVO ) structure , is generally viewed beyond the scope of the phrase-based systems discussed above , because of either distortion limits or lack of discriminative features for modeling . The most notable solution to this problem is adopting syntax-based SMT models , especially methods making use of source side syntactic parse trees . There are two major categories in this line of research . One is tree-to-string model ( Quirk et al . , 2005 ; Liu et al . , 2006 ) which directly uses source parse trees to derive a large set of translation rules and associated model parameters . The other is called syntax pre-reordering - an approach that re-positions source words to approximate target language word order as much as possible based on the features from source syntactic parse trees . This is usually done in a preprocessing step , and then followed by a standard phrase-based SMT system that takes the re-ordered source sentence as input to finish the translation . In this paper , we continue this line of work and address the problem of word reordering based on source syntactic parse trees for SMT . Similar to most previous work , our approach tries to rearrange the source tree nodes sharing a common parent to mimic the word order in target language . To this end , we propose a simple but effective ranking-based approach to word reordering . The ranking model is automatically derived from the word aligned parallel data , viewing the source tree nodes to be reordered as list items to be ranked . 
The ranks of tree nodes are determined by their relative positions in the target language -the node in the most front gets the highest rank , while the ending word in the target sentence gets the lowest rank . The ranking model is trained to directly minimize the mis-ordering of tree nodes , which differs from the prior work based on maximum likelihood estimations of reordering patterns ( Li et al . , 2007 ; Genzel , 2010 ) , and does not require any special tweaking in model training . The ranking model can not only be used in a pre-reordering based SMT system , but also be integrated into a phrasebased decoder serving as additional distortion features . We evaluated our approach on large-scale Japanese-English and English-Japanese machine translation tasks , and experimental results show that our approach can bring significant improvements to the baseline phrase-based SMT system in both preordering and integrated decoding settings . In the rest of the paper , we will first formally present our ranking-based word reordering model , then followed by detailed steps of modeling training and integration into a phrase-based SMT system . Experimental results are shown in Section 5 . Section 6 consists of more discussions on related work , and Section 7 concludes the paper . In this paper we present a ranking based reordering method to reorder source language to match the word order of target language given the source side parse tree . Reordering is formulated as a task to rank different nodes in the source side syntax tree according to their relative position in the target language . The ranking model is automatically trained to minimize the mis-ordering of tree nodes in the training data . Large scale experiment shows improvement on both reordering metric and SMT performance , with up to 1.73 point BLEU gain in our evaluation test . In future work , we plan to extend the ranking model to handle reordering between multiple levels of source trees . We also expect to explore better way to integrate ranking reorder model into SMT system instead of a simple penalty scheme . Along the research direction of preprocessing the source language to facilitate translation , we consider to not only change the order of the source language , but also inject syntactic structure of the target language into source language by adding pseudo words into source sentences . | Long distance word reordering remains a challenge for statistical machine translation and existing approaches do it during the preprocessing.
They propose a ranking-based reordering approach where the ranking model is automatically derived from the word aligned parallel data using a syntax parser.
Large-scale evaluation on Japanese-English and English-Japanese translation tasks shows that the proposed approach significantly outperforms the baseline phrase-based statistical machine translation system.
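To illustrate how ranking supervision can be read off word-aligned parallel data, the toy function below orders sibling source-tree nodes by the average target position of their aligned words; the span and alignment formats are assumptions, and the actual model learns to predict such ranks from lexical and syntactic features.

    def gold_ranks(sibling_spans, alignment):
        # sibling_spans: list of (start, end) source-token spans, one per child node
        # under a common parent in the source parse tree
        # alignment: dict mapping a source token index to a list of target indices
        def avg_target_pos(span):
            positions = [j for i in range(span[0], span[1] + 1)
                         for j in alignment.get(i, [])]
            return sum(positions) / len(positions) if positions else float("inf")
        order = sorted(range(len(sibling_spans)),
                       key=lambda k: avg_target_pos(sibling_spans[k]))
        ranks = [0] * len(sibling_spans)
        for rank, child in enumerate(order):
            ranks[child] = rank            # 0 = should come first on the target side
        return ranks

    # toy SOV -> SVO example: [subject][object][verb] aligned to target positions [0][2][1]
    print(gold_ranks([(0, 0), (1, 1), (2, 2)], {0: [0], 1: [2], 2: [1]}))   # [0, 2, 1]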
D09-1072 | You are an expert at summarizing long articles. Proceed to summarize the following text:
We propose a new model for unsupervised POS tagging based on linguistic distinctions between open and closed-class items . Exploiting notions from current linguistic theory , the system uses far less information than previous systems , far simpler computational methods , and far sparser descriptions in learning contexts . By applying simple language acquisition techniques based on counting , the system is given the closed-class lexicon , acquires a large open-class lexicon and then acquires disambiguation rules for both . This system achieves a 20 % error reduction for POS tagging over state-of-the-art unsupervised systems tested under the same conditions , and achieves comparable accuracy when trained with much less prior information . All recent research on unsupervised tagging , as well as the majority of work on supervised taggers , views POS tagging as a sequential labeling problem and treats all POS tags , both closed-and open-class , as roughly equivalent . In this work we explore a different understanding of the tagging problem , viewing it as a process of first identifying functional syntactic contexts , which are flagged by closed-class items , and then using these functional contexts to determine the POS labels . This disambiguation model differs from most previous work in three ways : 1 ) it uses different encodings over two distinct domains ( roughly open-and closed-class words ) with complementary distribution ( and so decodes separately ) ; 2 ) it is deterministic and 3 ) it is non-lexicalized . By learning disambiguation models for open-and closed-classes separately , we found that the deterministic , rulebased model can be learned from unannotated data by a simple strategy of selecting a rule in each appropriate context with the highest count . In contrast to this , most previous work on unsupervised tagging ( especially for English ) concentrates on improving the parameter estimation techniques for training statistical disambiguation models from unannotated data . For example , ( Smith&Eisner , 2005 ) proposes contrastive estimation ( CE ) for log-linear models ( CRF ) , achieving the current state-of-the-art performance of 90.4 % ; ( Goldwater&Griffiths , 2007 ) applies a Bayesian approach to improve maximumlikelihood estimation ( MLE ) for training generative models ( HMM ) . In the main experiments of both of these papers , the disambiguation model is learned , but the algorithms assume a complete knowledge of the lexicon with all possible tags for each word . In this work , we propose making such a large lexicon unnecessary by learning the bulk of the lexicon along with learning a disambiguation model . Little previous work has been done on this natural and simple idea because the clusters found by previous induction schemes are not in line with the lexical categories that we care about . ( Chan , 2008 ) is perhaps the first with the intention of generating " a discrete set of clusters . " By applying similar techniques to ( Chan , 2008 ) , which we discuss later , we can generate clusters that closely approximate the central open-class lexical categories , a major advance , but we still require a closed-class lexicon specifying possible tags for these words . This asymmetry in our lexicon acquisition model conforms with our understanding of natural language as structured data over two distinct domains with complementary distribution : open-class ( lexical ) and closed-class ( functional ) . 
Provided with only a closed-class lexicon of 288 words , about 0.6 % of the full lexicon , the system acquires a large open-class lexicon and then acquires disambiguation rules for both closed-and open-class words , achieving a tagging accuracy of 90.6 % for a 24k dataset , as high as the current state-of-the-art ( 90.4 % ) achieved with a complete dictionary . In the test condition where both algorithms are provided with a full lexicon , and are trained and evaluated over the same 96k dataset , we reduce the tagging error by up to 20 % . In Section 2 we explain our understanding of the POS tagging problem in detail and define the notions of functional context and open-and closedclass elements . Then we will introduce our methods for acquiring the lexicon ( Section 3 ) and learning disambiguation models ( Section 4 , 5 and 6 ) step by step . Results are reported in Section 7 followed by Section 8 which discusses the linguistic motivation behind this work and the simplicity and efficiency of our model . In this work on unsupervised tagging , we combine lexicon acquisition with the learning of a POS disambiguation model . Moreover , the disambiguation model we used is deterministic , nonlexicalized and defined over two distinct domains with complementary distribution ( open-and closed-class ) . Building a lexicon based on induced clusters requires our morphological knowledge of three special endings in English : -ing , -ed and -s ; on the other hand , to reduce the feature space used for category induction , we utilize vectors of functional features only , exploiting our knowledge of the role of determiners and modal verbs . However , the above information is restricted to the lexicon acquisition model . Taking a lexicon as input , which either consists of a known closed-class lexicon together with an acquired open-class lexicon or is composed by automatic extraction from the Penn Treebank , we need NO language-specific knowledge for learning the disambiguation model . We would like to point the reader to ( Chan , 2008 ) for more discussion on Category induction14 ; and discussions below will concentrate on the proposed disambiguation model . Current Chomskian theory , developed in the Minimalist Program ( MP ) ( Chomsky , 2006 ) , argues ( very roughly speaking ) that the syntactic structure of a sentence is built around a scaffolding provided by a set of functional elements15 . Each of these provides a large tree fragment ( roughly corresponding to what Chomsky calls a phase ) that provide the piece parts for full utterances . Chomsky observes that when these fragments combine , only the very edge of the fragments can change and that the internal structure of these fragments is rigid ( he labels this observation the Phase Impenetrability Condition , PIC ) . With the belief in PIC , we propose the concept of functional context , in which category property can be determined ; also we notice the distinct distribution of the elements ( functional ) on the edge of phase and those ( lexical ) assembled within the phase . Instead of chasing the highest possible performance by using the strongest method possible , we wanted to explore how well a deterministic , nonlexicalized model , following certain linguistic intuitions , can approach the NLP problem . For the unsupervised tagging task , this simple model , with less than two hundred rules learned , even outperforms non-deterministic generative models with ten of thousands of parameters . 
Another motivation for our pursuit of this deterministic , non-lexicalized model is computational efficiency 16 . It takes less than 3 minutes total for our model to acquire the lexicon , learn the disambiguation model , tag raw data and evaluate the output for a 96k dataset on a small laptop17 . And a model using only counting and selecting is common in the research field of language acquisition and perhaps more compatible to the way humans process language . We are certainly aware that our work does not yet address two problems : 1 ) . How the system can be adapted to work for other languages and 2 ) How to automatically obtain the knowledge of functional elements . We believe that , given the proper understanding of functional elements , our system will be easily adapted to other languages , but we clearly need to test this hypothesis . Also , we are highly interested in completing our system by incorporating the acquisition of functional elements . ( Chan , 2008 ) presents an extensive discussion of his work on morphological induction and ( Mintz et al . , 2002 ) presents interesting psychological experiments we can build on to acquire closed-class words . | Current approaches tackle unsupervised POS tagging as a sequential labelling problem and require a complete knowledge of the lexicon.
They propose to first identify functional syntactic contexts and then use them to make predictions for POS tagging.
The proposed method achieves equivalent performance by using 0.6% of the lexical knowledge used in baseline models. |
2020.emnlp-main.505 | You are an expert at summarizing long articles. Proceed to summarize the following text:
News headline generation aims to produce a short sentence to attract readers to read the news . One news article often contains multiple keyphrases that are of interest to different users , which can naturally have multiple reasonable headlines . However , most existing methods focus on the single headline generation . In this paper , we propose generating multiple headlines with keyphrases of user interests , whose main idea is to generate multiple keyphrases of interest to users for the news first , and then generate multiple keyphrase-relevant headlines . We propose a multi-source Transformer decoder , which takes three sources as inputs : ( a ) keyphrase , ( b ) keyphrase-filtered article , and ( c ) original article to generate keyphrase-relevant , highquality , and diverse headlines . Furthermore , we propose a simple and effective method to mine the keyphrases of interest in the news article and build a first large-scale keyphraseaware news headline corpus , which contains over 180 K aligned triples of news article , headline , keyphrase . Extensive experimental comparisons on the real-world dataset show that the proposed method achieves state-of-theart results in terms of quality and diversity 1 . News Headline Generation is an under-explored subtask of text summarization ( See et al . , 2017 ; Gehrmann et al . , 2018 ; Zhong et al . , 2019 ) . Unlike text summaries that contain multiple contextrelated sentences to cover the main ideas of a document , news headlines often contain a single short sentence to encourage users to read the news . Since one news article typically contains multiple keyphrases or topics of interest to different users , it is useful to generate multiple headlines covering different keyphrases for the news article . Multiheadline generation aims to generate multiple independent headlines , which allows us to recommend news with different news headlines based on the interests of users . Besides , multi-headline generation can provide multiple hints for human news editors to assist them in writing news headlines . However , most existing methods ( Takase et al . , 2016 ; Ayana et al . , 2016 ; Murao et al . , 2019 ; Colmenares et al . , 2019 ; Zhang et al . , 2018 ) focus on single-headline generation . The headline generation process is treated as an one-to-one mapping ( the input is an article and the output is a headline ) , which trains and tests the models without any additional guiding information or constraints . We argue that this may lead to two problems . Firstly , since it is reasonable to generate multiple headlines for the news , training to generate the single ground-truth might result in a lack of more detailed guidance . Even worse , a single ground-truth without any constraint or guidance is often not enough to measure the quality of the generated headline for model testing . For example , even if a generated headline is considered reasonable by humans , it can get a low score in ROUGE ( Lin , 2004 ) , because it might focus on the keyphrases or aspects that are not consistent with the ground-truth . In this paper , we incorporate the keyphrase information into the headline generation as additional guidance . Unlike one-to-one mapping employed in previous works , we treat the headline generation process as a two-to-one mapping , where the inputs are news articles and keyphrases , and the output is a headline . 
We propose a keyphrase-aware news multi-headline generation method , which contains two modules : ( a ) Keyphrase Generation Model , which aims to generate multiple keyphrases of interest to users for the news article . ( b ) Keyphrase-Aware Multi-Headline Generation Model , which takes the news article and a keyphrase as input and generates a keyphrase-relevant news headline . For training models , we build a first large-scale news keyphrase-aware headline corpus that contains 180 K aligned triples of news article , headline , keyphrase . As in years past , a lot of the food trends of the year were based on creating perfectly photogenic dishes . An aesthetically pleasing dish , however , does n't mean it will stand the test of time . In fact , it 's not uncommon for food trends to be all the hype one year and die out the next . From broccoli coffee to " bowl food , " here are 10 food trends that you likely wo n't see in 2019 . ... [ 15 sentences with 307 words are abbreviated from here . ] In 2018 , restaurants all over the US decided it was a good idea to place gold foil on everything from ice cream to chicken wings to pizza resulting in an expensive food trend . For example , the Ainsworth in New York City sells $ 1,000 worth of gold covered chicken wings . It seems everyone can agree that this is a food trend that might soon disappear . In this paper , we demonstrate how to enable news headline generation systems to be aware of keyphrases such that the model can generate diverse news headlines in a controlled manner . We also build a first large-scale keyphrase-aware news headline corpus , which is based on mining the keyphrases of users ' interests in news articles with user queries . Moreover , we propose a keyphraseaware news multi-headline generation model that contains a multi-source Transformer decoder with three variants of attention-based fusing mechanisms . Extensive experiments on the real-world dataset show that our approach can generate highquality , keyphrase-relevant , and diverse news headlines , which outperforms many strong baselines . | Existing news headline generation models only focus on generating one output even though news articles often have multiple points.
They propose a multi-source transformer decoder and train it using a new large-scale keyphrase-aware news headline corpus built from a search engine.
Their model outperforms strong baselines on their new real-world keyphrase-aware headline generation dataset. |
P18-1222 | You are an expert at summarizing long articles. Proceed to summarize the following text:
Hypertext documents , such as web pages and academic papers , are of great importance in delivering information in our daily life . Although being effective on plain documents , conventional text embedding methods suffer from information loss if directly adapted to hyper-documents . In this paper , we propose a general embedding approach for hyper-documents , namely , hyperdoc2vec , along with four criteria characterizing necessary information that hyper-document embedding models should preserve . Systematic comparisons are conducted between hyperdoc2vec and several competitors on two tasks , i.e. , paper classification and citation recommendation , in the academic paper domain . Analyses and experiments both validate the superiority of hyperdoc2vec to other models w.r.t . the four criteria . The ubiquitous World Wide Web has boosted research interests on hypertext documents , e.g. , personal webpages ( Lu and Getoor , 2003 ) , Wikipedia pages ( Gabrilovich and Markovitch , 2007 ) , as well as academic papers ( Sugiyama and Kan , 2010 ) . Unlike independent plain documents , a hypertext document ( hyper-doc for short ) links to another hyper-doc by a hyperlink or citation mark in its textual content . Given this essential distinction , hyperlinks or citations are worth specific modeling in many tasks such as link-based classification ( Lu and Getoor , 2003 ) , web retrieval ( Page et al . , 1999 ) , entity linking ( Cucerzan , 2007 ) , and citation recommendation ( He et al . , 2010 ) . To model hypertext documents , various efforts ( Cohn and Hofmann , 2000 ; Kataria et al . , 2010 ; Perozzi et al . , 2014 ; Zwicklbauer et al . , 2016 ; Wang et al . , 2016 ) have been made to depict networks of hyper-docs as well as their content . Among potential techniques , distributed representation ( Mikolov et al . , 2013 ; Le and Mikolov , 2014 ) tends to be promising since its validity and effectiveness are proven for plain documents on many natural language processing ( NLP ) tasks . Conventional attempts on utilizing embedding techniques in hyper-doc-related tasks generally fall into two types . The first type ( Berger et al . , 2017 ; Zwicklbauer et al . , 2016 ) simply downcasts hyper-docs to plain documents and feeds them into word2vec ( Mikolov et al . , 2013 ) ( w2v for short ) or doc2vec ( Le and Mikolov , 2014 ) ( d2v for short ) . These approaches involve downgrading hyperlinks and inevitably omit certain information in hyper-docs . However , no previous work investigates the information loss , and how it affects the performance of such downcasting-based adaptations . The second type designs sophisticated embedding models to fulfill certain tasks , e.g. , citation recommendation ( Huang et al . , 2015b ) , paper classification ( Wang et al . , 2016 ) , and entity linking ( Yamada et al . , 2016 ) , etc . These models are limited to specific tasks , and it is yet unknown whether embeddings learned for those particular tasks can generalize to others . Based on the above facts , we are interested in two questions : • What information should hyper-doc embedding models preserve , and what nice property should they possess ? • Is there a general approach to learning taskindependent embeddings of hyper-docs ? To answer the two questions , we formalize the hyper-doc embedding task , and propose four criteria , i.e. , content awareness , context awareness , newcomer friendliness , and context intent aware-ness , to assess different models . 
Then we discuss simple downcasting-based adaptations of existing approaches w.r.t . the above criteria , and demonstrate that none of them satisfy all four . To this end , we propose hyperdoc2vec ( h-d2v for short ) , a general embedding approach for hyperdocs . Different from most existing approaches , h-d2v learns two vectors for each hyper-doc to characterize its roles of citing others and being cited . Owning to this , h-d2v is able to directly model hyperlinks or citations without downgrading them . To evaluate the learned embeddings , we employ two tasks in the academic paper domain1 , i.e. , paper classification and citation recommendation . Experimental results demonstrate the superiority of h-d2v . Comparative studies and controlled experiments also confirm that h-d2v benefits from satisfying the above four criteria . We summarize our contributions as follows : • We propose four criteria to assess different hyper-document embedding models . • We propose hyperdoc2vec , a general embedding approach for hyper-documents . • We systematically conduct comparisons with competing approaches , validating the superiority of h-d2v in terms of the four criteria . We focus on the hyper-doc embedding problem . We propose that hyper-doc embedding algorithms should be content aware , context aware , newcomer friendly , and context intent aware . To meet all four criteria , we propose a general approach , hyperdoc2vec , which assigns two vectors to each hyper-doc and models citations in a straightforward manner . In doing so , the learned embeddings satisfy all criteria , which no existing model is able to . For evaluation , paper classification and citation recommendation are conducted on three academic paper datasets . Results confirm the effectiveness of our approach . Further analyses also demonstrate that possessing the four properties helps h-d2v outperform other models . | Existing text embedding methods do not take structures of hyper-documents into account losing useful properties for downstream tasks.
They propose an embedding method for hyper-documents that learns citation information along with four criteria to assess the properties the models should preserve.
The proposed model satisfies all of the introduced criteria and performs two tasks in the academic domain better than existing models. |
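The preview above shows that each row pairs an `id` (an ACL Anthology-style identifier) with a `text` field (a summarization instruction followed by the paper body) and a short `summary`. Below is a minimal sketch of how such a dataset could be loaded and inspected with the Hugging Face `datasets` library; the repository id, split name, and the prompt-stripping helper are illustrative assumptions, not details taken from this card.

```python
# Minimal sketch: loading and inspecting the dataset with the `datasets` library.
# The repository id and split name below are placeholders -- substitute the
# actual values shown on this page.
from datasets import load_dataset

REPO_ID = "user/scientific-paper-summaries"  # hypothetical id, adjust as needed

ds = load_dataset(REPO_ID, split="train")  # split name is an assumption

# The instruction prompt embedded at the start of each `text` field,
# copied verbatim from the preview records above.
PROMPT = ("You are an expert at summarizing long articles. "
          "Proceed to summarize the following text:")

for record in ds.select(range(min(3, len(ds)))):
    # Each record has three string fields: `id`, `text`, and `summary`.
    paper_id = record["id"]        # e.g. "D18-1065" or "P18-1222"
    text = record["text"]          # instruction prompt followed by the paper text
    summary = record["summary"]    # short multi-sentence summary

    # Strip the embedded prompt to recover only the source article.
    article = text.split(PROMPT, 1)[-1].strip() if PROMPT in text else text

    print(paper_id, len(article.split()), "words ->", summary[:80], "...")
```

Since the `text` field already embeds the instruction prompt, the split above is only needed when the raw source document is required on its own.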
Downloads last month: 27