id
stringlengths
32
33
x
stringlengths
41
1.75k
y
stringlengths
4
39
eb5ef34dd9c3845cd27c33242d5316_10
In addition, we conduct detailed experiment analysis on AIDA-CoNLL development set which shows our proposed model can reduce 67.03% type errors of the state-of-the-art model <cite>(Ganea and Hofmann 2017)</cite> and more than 90% of the remaining type error cases are due to over estimation of prior and global modeling problem which we leave as the further work.
differences
eb5ef34dd9c3845cd27c33242d5316_11
• We integrate a BERT-based entity similarity into the local model of a SOTA model <cite>(Ganea and Hofmann 2017)</cite> .
uses
eb5ef34dd9c3845cd27c33242d5316_12
Next, we introduce the general formulation of entity linking problem with a focus on the well known <cite>DeepED</cite> model <cite>(Ganea and Hofmann 2017)</cite> .
uses
eb5ef34dd9c3845cd27c33242d5316_13
Local Model Following <cite>Ganea and Hofmann (2017)</cite> , we instantiate the local model as an attention model based on pre-trained word and entity embeddings.
uses
eb5ef34dd9c3845cd27c33242d5316_14
Besides, <cite>Ganea and Hofmann (2017)</cite> combined this context score with the priorp(e|m) (computed by mixing mention-entity hyperlink count statistics from Wikipedia, a large Web corpus and YAGO.
background
eb5ef34dd9c3845cd27c33242d5316_15
Previous work (Yamada et al. 2016 ; <cite>Ganea and Hofmann 2017)</cite> on learning entity representation are mostly extensions of the embedding methods proposed by (Mikolov et al. 2013 ).
background
eb5ef34dd9c3845cd27c33242d5316_16
Previous work (Yamada et al. 2016 ; <cite>Ganea and Hofmann 2017)</cite> on learning entity representation are mostly extensions of the embedding methods proposed by (Mikolov et al. 2013 ). An entity's context is a bag-ofwords representation which mainly captures topic level entity relatedness rather than entity type relatedness. In contrast, we propose a simple method to build entity embeddings directly from pre-trained BERT (Devlin et al. 2019) which can better capture entity type information.
differences
eb5ef34dd9c3845cd27c33242d5316_17
As will be shown in the analysis section, the entity embeddings from BERT better capture entity type information than those from <cite>Ganea and Hofmann (2017)</cite> .
differences
eb5ef34dd9c3845cd27c33242d5316_18
The local context model of <cite>Ganea and Hofmann (2017)</cite> mainly captures the topic level entity relatedness information based on a long range bag-of-words context.
background
eb5ef34dd9c3845cd27c33242d5316_19
Finally, as for the local disambiguation model, we integrate the BERT-based entity similarity Ψ BERT (e, c) with the local context score Ψ long (e, c) (defined in Equation 2) and the priorp(e|m i ) with two fully connected layers of 100 hidden units and ReLU non-linearities following the same feature composition methods as <cite>Ganea and Hofmann (2017)</cite> .
uses
eb5ef34dd9c3845cd27c33242d5316_20
Then we adopt exactly the same global model as <cite>Ganea and Hofmann (2017)</cite> which is already introduced in the Background section.
uses
eb5ef34dd9c3845cd27c33242d5316_21
Specifically, we adopt <cite>loopy belief propagation (LBP)</cite> to estimate the max-marginal probabilityĝ i (e|D) and then combine it with the priorp(e|m i ) using a two-layer neural network to get the final score ρ i (e) for m i .
uses
eb5ef34dd9c3845cd27c33242d5316_22
Following previous work <cite>(Ganea and Hofmann 2017)</cite>, we only consider in-KB mentions.
uses
eb5ef34dd9c3845cd27c33242d5316_23
Besides, our candidate generation strategy follows that of <cite>Ganea and Hofmann (2017)</cite> to make our results comparable.
uses
eb5ef34dd9c3845cd27c33242d5316_24
The main goal of this work is to introduce a BERT-based entity similarity to capture latent entity type information which is supplementary to existing SOTA local context model <cite>(Ganea and Hofmann 2017)</cite> .
motivation
eb5ef34dd9c3845cd27c33242d5316_25
So we evaluate the performance when integrating the BERT-based entity similarity into the local context model of <cite>Ganea and Hofmann (2017)</cite> . We also evaluate our model with or without global modeling method of <cite>Ganea and Hofmann (2017)</cite> .
uses
eb5ef34dd9c3845cd27c33242d5316_26
To verify the contribution of our proposed BERT-based entity embeddings, we also compare with a straightforward baseline which directly replaces the encoder of <cite>Ganea and Hofmann (2017)</cite> utilizing pre-trained BERT. To do so, we introduce a 768 × 300 dimensional matrix W which projects BERT-based context representation c into <cite>Ganea and Hofmann (2017)</cite>'s entity embeddings space when calculating the similarity score.
uses
eb5ef34dd9c3845cd27c33242d5316_27
The resources (word and entity embeddings) used to train the local context model of <cite>Ganea and Hofmann (2017)</cite> are obtained from <cite>DeepED 7</cite> .
uses
eb5ef34dd9c3845cd27c33242d5316_29
Similar to <cite>Ganea and Hofmann (2017)</cite> , all the entity embeddings are fixed during fine-tuning.
similarities
eb5ef34dd9c3845cd27c33242d5316_30
Note that all the hyper-parameters used in the local context and global model of <cite>Ganea and Hofmann (2017)</cite> were set to the same values as theirs for direct comparison purpose.
uses
eb5ef34dd9c3845cd27c33242d5316_31
Our local model achieves a 1.31 improvement in terms of F1 over its corresponding baseline <cite>(Ganea and Hofmann 2017)</cite> , yielding a very competitive local model with an average 90.06 F1 score even surpassing the performance of four local & global models.
differences
eb5ef34dd9c3845cd27c33242d5316_32
Equipped with the global modeling method of <cite>Ganea and Hofmann (2017)</cite> , the performance of our model further increase to 93.54 with an average 1.32 improvement in terms of F1 over <cite>Ganea and Hofmann (2017)</cite> .
differences
eb5ef34dd9c3845cd27c33242d5316_33
The model of Le and Titov (2018) is a multirelational extension of <cite>Ganea and Hofmann (2017)</cite>'s global modeling method while keeps exactly the same local context model.
extends
eb5ef34dd9c3845cd27c33242d5316_34
Moreover, BERT+G&Hs embeddings performs significantly worse than the baseline <cite>(Ganea and Hofmann 2017)</cite> and our proposed BERT-Entity-Sim model.
similarities
eb5ef34dd9c3845cd27c33242d5316_35
Moreover, BERT+G&Hs embeddings performs significantly worse than the baseline <cite>(Ganea and Hofmann 2017)</cite> and our proposed BERT-Entity-Sim model. The reason is that BERT-based context representation space and <cite>Ganea and Hofmanns entity embeddings space</cite> are heterogeneous. <cite>Ganea and Hofmanns entity embeddings</cite> are bootstrapped from word embeddings which mainly capture topic level entity relatedness, while BERT-based context representation is derived from BERT which naturally captures type information.
background
eb5ef34dd9c3845cd27c33242d5316_36
On average, our proposed model (BERT-Entity-Sim) outperforms the local & global version of <cite>Ganea and Hofmann</cite>; Le and Titov (2017; by an average 0.80 and 0.51 on F1.
differences
eb5ef34dd9c3845cd27c33242d5316_39
Table 5 : Performance of two state-of-the-art fine grained entity typing systems on AIDA-CoNLL development set order to verify our claim that the entity embeddings from BERT better capture entity type information than those from <cite>Ganea and Hofmann (2017)</cite> , we carry out an entity type prediction task based on its entity embedding.
differences
eb5ef34dd9c3845cd27c33242d5316_40
As shown in Table 3 , our proposed entity embedding from BERT significantly outperforms the entity embedding proposed by <cite>Ganea and Hofmann (2017)</cite> on three typing sys-tems FIGER, BBN and OntoNotes fine .
differences
eb5ef34dd9c3845cd27c33242d5316_41
This demonstrates that our proposed entity embeddings from BERT indeed capture better latent entity type information than <cite>Ganea and Hofmann (2017)</cite> .
differences
eb5ef34dd9c3845cd27c33242d5316_42
This indicates that <cite>Ganea and Hofmann (2017)</cite> produces many type errors due to its inability to consider the entity type information in mention context.
uses
eb5ef34dd9c3845cd27c33242d5316_44
Besides, there are 22.95% remaining type errors which are due to global modeling problem which shows the limitation of the global modeling method of <cite>Ganea and Hofmann (2017)</cite> .
uses
eb5ef34dd9c3845cd27c33242d5316_45
It is nature to conjecture that we can also correct type errors by incorporating explicit type information into <cite>Ganea and Hofmann (2017)</cite> .
extends
eb5ef34dd9c3845cd27c33242d5316_46
DCA is a global entity linking model featuring better efficiency and effectiveness than that of <cite>Ganea and Hofmann (2017)</cite> by breaking the "all-mention coherence" assumption.
differences
eb5ef34dd9c3845cd27c33242d5316_48
We follow Papernot and Mc-Daniel (2018) <cite>Ganea and Hofmann (2017)</cite> and BERT based entity representation space neighbour in the context representation space.
uses
eb5ef34dd9c3845cd27c33242d5316_49
We also retrieve nearest entities in the embedding space of <cite>Ganea and Hofmann (2017)</cite> and ours. As we can see, we query STEVE JOBS, the nearest entity in <cite>Ganea and Hofmann (2017)</cite> is APPLE INC.
uses
eb5ef34dd9c3845cd27c33242d5316_50
Another example is when we query NA-TIONAL BASKETBALL ASSOCIATION, the most similar entities in <cite>Ganea and Hofmann (2017)</cite> are NBA teams which are topically related, while the entities retrieved by our approach are all basketball leagues.
differences
eb5ef34dd9c3845cd27c33242d5316_51
Then we integrate a BERT-based entity similarity into the local model of the state-of-the-art method by (<cite>Ganea and Hofmann 2017</cite>) .
uses
ebd4488438579946c23904cc0f5932_0
To our knowledge, <cite>Gildea and Jurafsky (2000)</cite> is the only work that uses FrameNet to build a statistical semantic classifier.
similarities
ebd4488438579946c23904cc0f5932_1
<cite>Gildea and Jurafsky (2000)</cite> describe a system that uses completely syntactic features to classify the Frame Elements in a sentence.
background
ebd4488438579946c23904cc0f5932_2
Frame: We extend <cite>Gildea and Jurafsky (2000)</cite> 's initial effort in three ways.
extends differences
ebd4488438579946c23904cc0f5932_3
Training (32,251 sentences), development (3,491 sentences), and held out test sets (3,398 sentences) were generated from the June 2002 FrameNet release following the divisions used in <cite>Gildea and Jurafsky (2000)</cite> 1 .
uses
ebd4488438579946c23904cc0f5932_4
Because human-annotated syntactic information could only be obtained for a subset of their data, the training, development, and test sets used here are approximately 10% smaller than those used in <cite>Gildea and Jurafsky (2000)</cite> .
differences
ebd4488438579946c23904cc0f5932_5
Due to data sparsity issues, we do not calculate this model directly, but rather, model various feature combinations as described in <cite>Gildea and Jurafsky (2000)</cite> .
extends differences
ebd4488438579946c23904cc0f5932_6
2 <cite>Gildea and Jurafsky (2000)</cite> use 36995 training, 4000 development, and 3865 test sentences. They do not report results using hand annotated syntactic information.
differences
ebd4488438579946c23904cc0f5932_7
As a further analysis, we have examined the performance of our base ME model on the same test set as that used in <cite>Gildea and Jurafsky (2000)</cite> .
similarities uses
ebd4488438579946c23904cc0f5932_8
Following <cite>Gildea and Jurafsky (2000)</cite> , automatic extraction of grammatical information here is limited to the governing category of a Noun Phrase.
differences
ec3702a6b30057fcae65ca297656d2_0
Examples of this approach are rarer, and we briefly mention two: Enright and Kondrak (2007) use singleton words (hapax legomena) to represent documents in a bilingual collection for the task of detecting document translation pairs, and <cite>Krstovski and Smith (2011)</cite> construct a vocabulary of overlapping words to represent documents in multilingual collections.
background
ec3702a6b30057fcae65ca297656d2_1
Examples of this approach are rarer, and we briefly mention two: Enright and Kondrak (2007) use singleton words (hapax legomena) to represent documents in a bilingual collection for the task of detecting document translation pairs, and <cite>Krstovski and Smith (2011)</cite> construct a vocabulary of overlapping words to represent documents in multilingual collections. The latter approach demonstrates high precision vs. recall values on various language pairs from different languages and writing systems when detecting translation pairs on a document level such as Europarl sessions. Recently proposed approaches, such as (Klementiev et al., 2012) use monolingual corpora to estimate phrase-based SMT parameters. Unlike our paper, however, they do not demonstrate an end-toend SMT system trained without any parallel data.
differences
ec3702a6b30057fcae65ca297656d2_2
Our bootstrapping approach (Figure 1 ) is a twostage system that used the Overlapping Cosine Distance (OCD) approach of <cite>Krstovski and Smith (2011)</cite> as its first step.
uses
ec3702a6b30057fcae65ca297656d2_3
While the number of overlapping words is dependent on the families of the source and target languages and their orthography, <cite>Krstovski and Smith (2011)</cite> showed that this approach yields good results across language pairs from different families and writing systems such as English-Greek, English-Bulgarian and EnglishArabic where, as one would expect, most shared words are numbers and named entities.
background
ec5897c392b05cb8712feadfc6d2bf_0
<cite>Hatzivassiloglou and McKeown (1993)</cite> duster adjectives into partitions and present an interesting evaluation to compare the generated adjective classes against those provided by an expert.
background
ec5897c392b05cb8712feadfc6d2bf_1
<cite>Hatzivassiloglou and McKeown (1993)</cite> duster adjectives into partitions and present an interesting evaluation to compare the generated adjective classes against those provided by an expert. Their evaluation scheme bases the comparison between two classes on the presence or absence of pairs of words in them. Their approach involves filling in a YES-NO contingency table based on whether a pair of words (adjectives, in their case) is classified in the same class by the human expert and by the system. This method works very well for partitions. However, if it is used to evaluate sets of classes where the classes may be potentiaily overlapping, their technique yields a weaker measure since the same word pair could possibly be present in more than one class. An ideal scheme used to evaluate semantic classes should be able to handle overlapping classes (as o1>. posed to partitions) as well as hierarchies. The technique proposed by Hatzivassiloglou and McKeown does not do a good job of evaluating either of these. In this paper, we present an evaluation methodology which makes it possible to properly evaluate over- In the discussion that follows, the word "clustering" is used to refer to the set of classes that may be either provided by an expert or generated by the system, and the word "class" is used to refer to a single class in the clustering.
motivation
ec5897c392b05cb8712feadfc6d2bf_2
We have adopted the F-measure<cite> (Hatzivassiloglou and McKeown, 1993</cite>; Chincor, 1992) .
uses
ec5897c392b05cb8712feadfc6d2bf_3
Once all classes in the two clusterings have been accounted for, calculate the precision, recall, and F-measure as explained in<cite> (Hatzivassiloglou and McKeown, 1993)</cite> .
uses
ecb6e93a5254b86ef49a5ffd0a52a0_0
We focus on the first two tasks. A non-word is a sequence of letters that is not a possible word in the language in any context, e.g., English * thier. Once a sequence of letters has been determined to be a non-word, isolatedword error correction is the process of determining the appropriate word to substitute for the non-word. Given a sequence of letters, there are thus two main subtasks: 1) determine whether this is a nonword, 2) if so, select and rank candidate words as potential corrections to present to the writer. The first subtask can be accomplished by searching for the sequence of letters in a word list. The second subtask can be stated as follows<cite> (Brill and Moore, 2000)</cite> : Given an alphabet Σ, a word list D of strings ∈ Σ * , and a string r / ∈ D and ∈ Σ * , find w ∈ D such that w is the most likely correction.
uses
ecb6e93a5254b86ef49a5ffd0a52a0_1
The spelling error model proposed by <cite>Brill and Moore (2000)</cite> allows generic string edit operations up to a certain length.
background
ecb6e93a5254b86ef49a5ffd0a52a0_2
<cite>Brill and Moore (2000)</cite> estimate the probability of each edit from a corpus of spelling errors.
background
ecb6e93a5254b86ef49a5ffd0a52a0_3
Toutanova and Moore (2002) extend <cite>Brill and Moore (2000)</cite> to consider edits over both letter sequences and sequences of phones in the pronunciations of the word and misspelling.
background
ecb6e93a5254b86ef49a5ffd0a52a0_4
They show that including pronunciation information improves performance as compared to <cite>Brill and Moore (2000)</cite> .
background
ecb6e93a5254b86ef49a5ffd0a52a0_5
The spelling correction models from <cite>Brill and Moore (2000)</cite> and Toutanova and Moore (2002) use the noisy channel model approach to determine the types and weights of edit operations.
background
ecb6e93a5254b86ef49a5ffd0a52a0_6
1 <cite>Brill and Moore (2000)</cite> allow all edit operations α → β where Σ is the alphabet and α, β ∈ Σ * , with a constraint on the length of α and β.
background
ecb6e93a5254b86ef49a5ffd0a52a0_7
This error model over letters, called P L , is approximated by <cite>Brill and Moore (2000)</cite> as shown in Figure 1 by considering only the pair of partitions of w and r with the maximum product of the probabilities of individual substitutions.
background
ecb6e93a5254b86ef49a5ffd0a52a0_8
The method, which is described in detail in <cite>Brill and Moore (2000)</cite> , involves aligning the letters in pairs of words and misspellings, expanding each alignment with up to N neighboring alignments, and calculating the probability of each α → β alignment.
background
ecb6e93a5254b86ef49a5ffd0a52a0_10
Toutanova and Moore (2002) describe an extension to <cite>Brill and Moore (2000)</cite> where the same noisy channel error model is used to model phone sequences instead of letter sequences.
background
ecb6e93a5254b86ef49a5ffd0a52a0_11
7 In order to rank the words as candidate corrections for a misspelling r, P L (r|w) and P P HL (r|w) are calculated for each word in the word list using the algorithm described in <cite>Brill and Moore (2000)</cite> .
uses
ece5f95d3c616ceeb0b3061e606b41_0
In particular, we are interested in the word2vec package available in<cite> (Mikolov et al., 2013a)</cite> .
motivation
ece5f95d3c616ceeb0b3061e606b41_1
The basic architecture that we use to build our models is CBOW<cite> (Mikolov et al., 2013a)</cite> .
uses
ece5f95d3c616ceeb0b3061e606b41_2
We chose these parameters for our system to obtain comparable results to the ones in<cite> (Mikolov et al., 2013a</cite> ) for a CBOW architecture but trained with 783 million words (50.4%).
similarities motivation
ee219d599e0e0c2bbebc1849863005_0
This mismatch of needs has motivated various proposals to reconstruct missing entries, in WALS and other databases, from known entries (Daumé III and Campbell, 2007; Daumé III, 2009; Coke et al., 2016; <cite>Littell et al., 2017)</cite> .
background
ee219d599e0e0c2bbebc1849863005_1
We calculate these feature vectors using an NMT model trained on 1017 languages, and use them for typlogy prediction both on their own and in composite with feature vectors from previous work based on the genetic and geographic distance between languages<cite> (Littell et al., 2017)</cite> .
uses
ee219d599e0e0c2bbebc1849863005_2
Typology Database: To perform our analysis, we use the URIEL language typology database<cite> (Littell et al., 2017)</cite> , which is a collection of binary features extracted from multiple typological, phylogenetic, and geographical databases such as WALS (World Atlas of Language Structures) (Collins and Kayne, 2011) , PHOIBLE (Moran et al., 2014) , Ethnologue (Lewis et al., 2015) , and Glottolog (Hammarström et al., 2015) .
uses
ee219d599e0e0c2bbebc1849863005_3
As an alternative that does not necessarily require pre-existing knowledge of the typological features in the language at hand,<cite> Littell et al. (2017)</cite> have proposed a method for inferring typological features directly from the language's k nearest neighbors (k-NN) according to geodesic distance (distance on the Earth's surface) and genetic distance (distance according to a phylogenetic family tree).
background
ee219d599e0e0c2bbebc1849863005_4
As an alternative that does not necessarily require pre-existing knowledge of the typological features in the language at hand,<cite> Littell et al. (2017)</cite> have proposed a method for inferring typological features directly from the language's k nearest neighbors (k-NN) according to geodesic distance (distance on the Earth's surface) and genetic distance (distance according to a phylogenetic family tree). In our experiments, our baseline uses this method by taking the 3-NN for each language according to normalized geodesic+genetic distance, and calculating an average feature vector of these three neighbors.
uses
eebf1edb6dbd3e58a904eff309f548_0
Language understanding is modeled as the task of converting natural language questions into queries through intermediate logical forms, with the popular two approaches including: CCG parsing (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Zettlemoyer and Collins, 2009; Kwiatkowski et al., 2010; Kwiatkowski et al., 2011; Krishnamurthy and Mitchell, 2012; Kwiatkowski et al., 2013; Cai and Yates, 2013a) , and dependencybased compositional semantics (Liang et al., 2011; <cite>Berant et al., 2013</cite>; Berant and Liang, 2014) .
background
eebf1edb6dbd3e58a904eff309f548_1
While our conclusions should hold generally for similar KBs, we will focus on Freebase, such as explored by Krishnamurthy and Mitchell (2012) , and then others such as Cai and Yates (2013a) and<cite> Berant et al. (2013)</cite> .
uses
eebf1edb6dbd3e58a904eff309f548_2
We compare two open-source, state-ofthe-art systems on the task of Freebase QA: the semantic parsing system SEMPRE<cite> (Berant et al., 2013)</cite> , and the IE system jacana-freebase (Yao and Van Durme, 2014) .
uses
eebf1edb6dbd3e58a904eff309f548_3
A major distinction between the work of<cite> Berant et al. (2013)</cite> and Yao and Van Durme (2014) is the ability of the former to represent, and compose, aggregation operators (such as argmax, or count), as well as integrate disparate pieces of information.
background
eebf1edb6dbd3e58a904eff309f548_4
SEMPRE 3 is an open-source system for training semantic parsers, that has been utilized to train a semantic parser against Freebase by<cite> Berant et al. (2013)</cite> .
background
eebf1edb6dbd3e58a904eff309f548_5
Both<cite> Berant et al. (2013)</cite> and Yao and Van Durme (2014) tested their systems on the WEBQUESTIONS dataset, which contains 3778 training questions and 2032 test questions collected from the Google Suggest API.
background
ef6bd5e57196c013d7d0436e5b0ca5_0
The Fact Extraction and VERification (FEVER) task <cite>(Thorne et al., 2018)</cite> focuses on verification of textual claims against evidence.
background
ef6bd5e57196c013d7d0436e5b0ca5_1
The Fact Extraction and VERification (FEVER) task <cite>(Thorne et al., 2018)</cite> focuses on verification of textual claims against evidence. This paper describes our participating system in the FEVER shared task.
uses background
ef6bd5e57196c013d7d0436e5b0ca5_2
The architecture of our system is designed by following the official baseline system <cite>(Thorne et al., 2018)</cite> .
uses
ef6bd5e57196c013d7d0436e5b0ca5_3
To build the model, we utilize the NEARESTP dataset described in<cite> Thorne et al. (2018)</cite> .
uses
ef6bd5e57196c013d7d0436e5b0ca5_4
For parameter tuning and performance evaluation, we used a development and test datasets used in <cite>(Thorne et al., 2018)</cite> .
uses
ef742defff1c2bdf145f72796cf3af_0
We show how this approach can be combined with additional features, in particular, the discourse features presented by<cite> Jansen et al. (2014)</cite> .
uses
ef742defff1c2bdf145f72796cf3af_1
Most previous attempts to perform non-factoid answer reranking on CQA data are supervised, feature-based, learning-to-rank approaches<cite> (Jansen et al., 2014</cite>; Fried et al., 2015; Sharp et al., 2015) .
background
ef742defff1c2bdf145f72796cf3af_2
These methods represent the candidate answers as meaningful handcrafted features based on syntactic, semantic and discourse parses (Surdeanu et al., 2011; <cite>Jansen et al., 2014)</cite> , web correlation (Surdeanu et al., 2011) , and translation probabilities (Fried et al., 2015; Surdeanu et al., 2011) .
background
ef742defff1c2bdf145f72796cf3af_3
First, we present a novel neural approach to answer reranking that achieves competitive results on a public dataset of Yahoo! Answers (YA) that was previously introduced by<cite> Jansen et al. (2014)</cite> and later used in several other studies (Fried et al., 2015; Sharp et al., 2015; Bogdanova and Foster, 2016) .
uses
ef742defff1c2bdf145f72796cf3af_4
The main contributions of this paper are as follows: 1) we propose a novel neural approach for non-factoid answer reranking that achieves state-4 http://askubuntu.com of-the-art performance on a public dataset of Yahoo! Answers; 2) we combine this approach with an approach based on discourse features that was introduced by<cite> Jansen et al. (2014)</cite> , with the hybrid approach outperforming the neural approach and the previous state-of-the-art; 3) we introduce a new dataset of Ask Ubuntu questions and answers.
extends
ef742defff1c2bdf145f72796cf3af_5
Fried et al. (2015) improve on the lexical semantic models of<cite> Jansen et al. (2014)</cite> by exploiting indirect associations between words using higher-order models.
background
ef742defff1c2bdf145f72796cf3af_6
Based on the intuition that modelling questionanswer structure both within and across sentences could be useful,<cite> Jansen et al. (2014)</cite> propose an answer reranking model based on discourse features combined with lexical semantics.
background
ef742defff1c2bdf145f72796cf3af_7
Based on the intuition that modelling questionanswer structure both within and across sentences could be useful,<cite> Jansen et al. (2014)</cite> propose an answer reranking model based on discourse features combined with lexical semantics. We experimentally evaluate these discourse features -added to our model described in Section 3 (the additional features x ext ) and on their own.
uses background
ef742defff1c2bdf145f72796cf3af_8
We illustrate the feature extraction process of<cite> Jansen et al. (2014)</cite> in Figure 2 .
uses
ef742defff1c2bdf145f72796cf3af_9
Further details can be found in<cite> (Jansen et al., 2014)</cite> .
background
ef742defff1c2bdf145f72796cf3af_10
For comparability, we use the dataset created by<cite> Jansen et al. (2014)</cite> which contains 10K how questions from Yahoo! Answers. 50% of it is used for training, 25% for development and 25% for testing.
uses
ef742defff1c2bdf145f72796cf3af_11
Further details about this dataset can be found in<cite> (Jansen et al., 2014)</cite> .
background
ef742defff1c2bdf145f72796cf3af_12
For comparability, we use the dataset created by<cite> Jansen et al. (2014)</cite> which contains 10K how questions from Yahoo! Answers. 50% of it is used for training, 25% for development and 25% for testing. Further details about this dataset can be found in<cite> (Jansen et al., 2014)</cite> .
uses background
ef742defff1c2bdf145f72796cf3af_13
Following<cite> Jansen et al. (2014)</cite> and Fried et al. (2015) , we implement two baselines: the baseline that selects an answer randomly and the candidate retrieval (CR) baseline.
uses
ef742defff1c2bdf145f72796cf3af_14
The CR baseline uses the same scoring as in<cite> Jansen et al. (2014)</cite> : the questions and the candidate answers are represented using tf-idf over lemmas; the candidate answers are ranked according to their cosine similarity to the respective question.
uses background
ef742defff1c2bdf145f72796cf3af_15
On the YA dataset, we also compare our results to the ones reported by<cite> Jansen et al. (2014)</cite> and by Bogdanova and Foster (2016) .
uses