Dataset columns (dtype and reported length/value range):
- title: string (lengths 5 to 342)
- author: string (lengths 3 to 2.17k)
- year: int64 (values 1.95k to 2.02k)
- abstract: string (lengths 0 to 12.7k)
- pages: string (lengths 1 to 702)
- queryID: string (lengths 4 to 40)
- query: string (lengths 1 to 300)
- paperID: string (lengths 0 to 40)
- include: int64 (values 0 to 1)
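Each record below lists these fields in order (title, author, year, abstract, pages, queryID, query, paperID, include), with nan marking missing values; the include flag appears to record whether a candidate paper was selected for the review query. As a minimal sketch of how the columns fit together (the Parquet file name and the pandas-based loading are assumptions, not part of the dataset), the rows could be filtered like this:

```python
# Minimal sketch, assuming the rows have been exported to a local Parquet
# file named "screening_data.parquet" (placeholder name) with the column
# names listed in the schema above.
import pandas as pd

df = pd.read_parquet("screening_data.parquet")

# Candidate papers flagged with include == 1 for this review query.
query_text = "Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review"
included = df[(df["query"] == query_text) & (df["include"] == 1)]

print(f"{len(df)} candidate papers, {len(included)} marked as included")
print(included[["title", "year", "paperID"]].head())
```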
Improving Semantic Parsing via Answer Type Inference
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Srivatsa, Mudhakar and Yan, Xifeng
2016
nan
149--159
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f3594f9d60c98cac88f9033c69c2b666713ed6d6
1
Automatic Extraction of Implicit Interpretations from Modal Constructions
Sanders, Jordan and Blanco, Eduardo
2016
nan
1098--1107
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
813779418a613d1faecd7b1deb9b4456121a9b7e
0
Factorizing Complex Models: A Case Study in Mention Detection
Florian, Radu and Jing, Hongyan and Kambhatla, Nanda and Zitouni, Imed
2006
nan
473--480
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
3221a2c32488439c61bcfad832c50917e1ef3bdf
1
Translation Context Sensitive {WSD}
Specia, Lucia and das Gra{\c{c}}as Volpe Nunes, Maria and Stevenson, Mark
2006
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
cffc77dc35c179166bb37124a759585cbcfd5d8a
0
End-to-End Trainable Attentive Decoder for Hierarchical Entity Classification
Karn, Sanjeev and Waltinger, Ulli and Sch{\"u}tze, Hinrich
2017
We address fine-grained entity classification and propose a novel attention-based recurrent neural network (RNN) encoder-decoder that generates paths in the type hierarchy and can be trained end-to-end. We show that our model performs better on fine-grained entity classification than prior work that relies on flat or local classifiers that do not directly model hierarchical structure.
752--758
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
7615ed4f35f84cc086b9ae8e421891f3d33c68a6
1
Using Gaze Data to Predict Multiword Expressions
Rohanian, Omid and Taslimipoor, Shiva and Yaneva, Victoria and Ha, Le An
2017
In recent years gaze data has been increasingly used to improve and evaluate NLP models due to the fact that it carries information about the cognitive processing of linguistic phenomena. In this paper we conduct a preliminary study towards the automatic identification of multiword expressions based on gaze features from native and non-native speakers of English. We report comparisons between a part-of-speech (POS) and frequency baseline to: i) a prediction model based solely on gaze data and ii) a combined model of gaze data, POS and frequency. In spite of the challenging nature of the task, best performance was achieved by the latter. Furthermore, we explore how the type of gaze data (from native versus non-native speakers) affects the prediction, showing that data from the two groups is discriminative to an equal degree for the task. Finally, we show that late processing measures are more predictive than early ones, which is in line with previous research on idioms and other formulaic structures.
601--609
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
6cbb05739c8d033eb027222c1a395391d877bc61
0
A Maximum Entropy Approach to Natural Language Processing
Berger, Adam L. and Della Pietra, Stephen A. and Della Pietra, Vincent J.
1996
nan
39--71
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
fb486e03369a64de2d5b0df86ec0a7b55d3907db
1
Extracting Topics from Texts Based on Situations
Ma, Zhiyi and Zhan, Xuegong and Yao, Tianshun
1996
nan
357--362
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f8c4d4bd5f119e1d29e7fa38e81de6316817bda3
0
Attributed and Predictive Entity Embedding for Fine-Grained Entity Typing in Knowledge Bases
Jin, Hailong and Hou, Lei and Li, Juanzi and Dong, Tiansi
2018
Fine-grained entity typing aims at identifying the semantic type of an entity in KB. Type information is very important in knowledge bases, but are unfortunately incomplete even in some large knowledge bases. Limitations of existing methods are either ignoring the structure and type information in KB or requiring large scale annotated corpus. To address these issues, we propose an attributed and predictive entity embedding method, which can fully utilize various kinds of information comprehensively. Extensive experiments on two real DBpedia datasets show that our proposed method significantly outperforms 8 state-of-the-art methods, with 4.0{\%} and 5.2{\%} improvement in Mi-F1 and Ma-F1, respectively.
282--292
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
fc958f9f9876689acbe99ad80c508ed7ad0e40bf
1
{GKR}: the Graphical Knowledge Representation for semantic parsing
Kalouli, Aikaterini-Lida and Crouch, Richard
2018
This paper describes the first version of an open-source semantic parser that creates graphical representations of sentences to be used for further semantic processing, e.g. for natural language inference, reasoning and semantic similarity. The Graphical Knowledge Representation which is output by the parser is inspired by the Abstract Knowledge Representation, which separates out conceptual and contextual levels of representation that deal respectively with the subject matter of a sentence and its existential commitments. Our representation is a layered graph with each sub-graph holding different kinds of information, including one sub-graph for concepts and one for contexts. Our first evaluation of the system shows an F-score of 85{\%} in accurately representing sentences as semantic graphs.
27--37
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
0e195eb08cb94a2aedadae06442ee2d4e0cc1016
0
Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing
Xiong, Wenhan and Wu, Jiawei and Lei, Deren and Yu, Mo and Chang, Shiyu and Guo, Xiaoxiao and Wang, William Yang
2019
Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance. Such techniques, however, are not directly applicable to more open and practical scenarios where the type set is not restricted by KB schema and includes a vast number of free-form types. To model the underlying label correlations without access to manually annotated label structures, we introduce a novel label-relational inductive bias, represented by a graph propagation layer that effectively encodes both global label co-occurrence statistics and word-level similarities. On a large dataset with over 10,000 free-form types, the graph-enhanced model equipped with an attention-based matching module is able to achieve a much higher recall score while maintaining a high-level precision. Specifically, it achieves a 15.3{\%} relative F1 improvement and also less inconsistency in the outputs. We further show that a simple modification of our proposed graph layer can also improve the performance on a conventional and widely-tested dataset that only includes KB-schema types.
773--784
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
a0713d945b2e5c2bdeeba68399c8ac6ea84e0ca6
1
Sample Size in {A}rabic Authorship Verification
Ahmed, Hossam
2019
nan
84--91
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f5f6cdf8deb778abc8469baed6824aeaf28289be
0
The Life and Death of Discourse Entities: Identifying Singleton Mentions
Recasens, Marta and de Marneffe, Marie-Catherine and Potts, Christopher
2013
nan
627--633
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
0fdf52226bde594ca069b2768249a4b10da9255d
1
Annotating Legitimate Disagreement in Corpus Construction
Wong, Billy T.M. and Lee, Sophia Y.M.
2013
nan
51--57
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
0e575e0c10cf734b491b7a11b2fd47f5dcd7c33e
0
Coreference in {W}ikipedia: Main Concept Resolution
Ghaddar, Abbas and Langlais, Phillippe
2016
nan
229--238
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
da07e8d3951aeda87846f4b7db321576b48a2c60
1
Comparing Named-Entity Recognizers in a Targeted Domain: Handcrafted Rules vs Machine Learning
Partalas, Ioannis and Lopez, C{\'e}dric and Segond, Fr{\'e}d{\'e}rique
2016
Comparing Named-Entity Recognizers in a Targeted Domain : Handcrafted Rules vs. Machine Learning Named-Entity Recognition concerns the classification of textual objects in a predefined set of categories such as persons, organizations, and localizations. While Named-Entity Recognition is well studied since 20 years, the application to specialized domains still poses challenges for current systems. We developed a rule-based system and two machine learning approaches to tackle the same task : recognition of product names, brand names, etc., in the domain of Cosmetics, for French. Our systems can thus be compared under ideal conditions. In this paper, we introduce both systems and we compare them.
389--395
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f64f17a5be7e9a5e05e5a9fce1c01058ba024d1d
0
Question Answering on Knowledge Bases and Text using Universal Schema and Memory Networks
Das, Rajarshi and Zaheer, Manzil and Reddy, Siva and McCallum, Andrew
2017
Existing question answering methods infer answers either from a knowledge base or from raw text. While knowledge base (KB) methods are good at answering compositional questions, their performance is often affected by the incompleteness of the KB. Au contraire, web text contains millions of facts that are absent in the KB, however in an unstructured form. Universal schema can support reasoning on the union of both structured KBs and unstructured text by aligning them in a common embedded space. In this paper we extend universal schema to natural language question answering, employing Memory networks to attend to the large body of facts in the combination of text and KB. Our models can be trained in an end-to-end fashion on question-answer pairs. Evaluation results on Spades fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone. This model also outperforms the current state-of-the-art by 8.5 F1 points.
358--365
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
2b2090eab4abe27e6e5e4ca94afaf82e511b63bd
1
Applying {BLAST} to Text Reuse Detection in {F}innish Newspapers and Journals, 1771-1910
Vesanto, Aleksi and Nivala, Asko and Rantala, Heli and Salakoski, Tapio and Salmi, Hannu and Ginter, Filip
2017
nan
54--58
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f69a2cf20e8a80cc9da747e2273f230b39847515
0
Embeddings of Label Components for Sequence Labeling: A Case Study of Fine-grained Named Entity Recognition
Kato, Takuma and Abe, Kaori and Ouchi, Hiroki and Miyawaki, Shumpei and Suzuki, Jun and Inui, Kentaro
2020
In general, the labels used in sequence labeling consist of different types of elements. For example, IOB-format entity labels, such as B-Person and I-Person, can be decomposed into span (B and I) and type information (Person). However, while most sequence labeling models do not consider such label components, the shared components across labels, such as Person, can be beneficial for label prediction. In this work, we propose to integrate label component information as embeddings into models. Through experiments on English and Japanese fine-grained named entity recognition, we demonstrate that the proposed method improves performance, especially for instances with low-frequency labels.
222--229
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
926588efc881ba9f4322ddfaae14555d058bbb46
1
Autoencoding Keyword Correlation Graph for Document Clustering
Chiu, Billy and Sahu, Sunil Kumar and Thomas, Derek and Sengupta, Neha and Mahdy, Mohammady
2020
Document clustering requires a deep understanding of the complex structure of long-text; in particular, the intra-sentential (local) and inter-sentential features (global). Existing representation learning models do not fully capture these features. To address this, we present a novel graph-based representation for document clustering that builds a \textit{graph autoencoder} (GAE) on a Keyword Correlation Graph. The graph is constructed with topical keywords as nodes and multiple local and global features as edges. A GAE is employed to aggregate the two sets of features by learning a latent representation which can jointly reconstruct them. Clustering is then performed on the learned representations, using vector dimensions as features for inducing document classes. Extensive experiments on two datasets show that the features learned by our approach can achieve better clustering performance than other existing features, including term frequency-inverse document frequency and average embedding.
3974--3981
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
e78b1c6cf0fbe935f12adf3a5ce3cde629252316
0
A Joint Named Entity Recognition and Entity Linking System
Stern, Rosa and Sagot, Beno{\^\i}t and B{\'e}chet, Fr{\'e}d{\'e}ric
2012
nan
52--60
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
443e814ab3b87ea51a18d4a3925a0fadeca03a9a
1
Regular polysemy: A distributional model
Boleda, Gemma and Pad{\'o}, Sebastian and Utt, Jason
2012
nan
151--160
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
969fdeafe142994f4bf41ffc35ffea235de2aa18
0
Mining Entity Types from Query Logs via User Intent Modeling
Pantel, Patrick and Lin, Thomas and Gamon, Michael
2012
nan
563--571
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
c19b48e088983bf3ab71751000d78409293f4cf0
1
Nouvelle approche pour le regroupement des locuteurs dans des {\'e}missions radiophoniques et t{\'e}l{\'e}visuelles (New approach for speaker clustering of broadcast news) [in {F}rench]
Rouvier, Mickael and Meignier, Sylvain
2012
nan
97--104
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
ba643220c830a2db0b0d0e0aa3d556bc30d40b2b
0
Label Embedding for Zero-shot Fine-grained Named Entity Typing
Ma, Yukun and Cambria, Erik and Gao, Sa
2016
Named entity typing is the task of detecting the types of a named entity in context. For instance, given {``}Eric is giving a presentation{''}, our goal is to infer that {`}Eric{'} is a speaker or a presenter and a person. Existing approaches to named entity typing cannot work with a growing type set and fails to recognize entity mentions of unseen types. In this paper, we present a label embedding method that incorporates prototypical and hierarchical information to learn pre-trained label embeddings. In addition, we adapt a zero-shot learning framework that can predict both seen and previously unseen entity types. We perform evaluation on three benchmark datasets with two settings: 1) few-shots recognition where all types are covered by the training set; and 2) zero-shot recognition where fine-grained types are assumed absent from training set. Results show that prior knowledge encoded using our label embedding methods can significantly boost the performance of classification for both cases.
171--180
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
0d12035f96d795fef0d6b4f70340934dd3dd98a1
1
{ELMD}: An Automatically Generated Entity Linking Gold Standard Dataset in the Music Domain
Oramas, Sergio and Anke, Luis Espinosa and Sordo, Mohamed and Saggion, Horacio and Serra, Xavier
2016
In this paper we present a gold standard dataset for Entity Linking (EL) in the Music Domain. It contains thousands of musical named entities such as Artist, Song or Record Label, which have been automatically annotated on a set of artist biographies coming from the Music website and social network Last.fm. The annotation process relies on the analysis of the hyperlinks present in the source texts and in a voting-based algorithm for EL, which considers, for each entity mention in text, the degree of agreement across three state-of-the-art EL systems. Manual evaluation shows that EL Precision is at least 94{\%}, and due to its tunable nature, it is possible to derive annotations favouring higher Precision or Recall, at will. We make available the annotated dataset along with evaluation data and the code.
3312--3317
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
a529e05211d685bbc6f80f1081e4784c325ea8d0
0
Fine-Grained Entity Typing via Hierarchical Multi Graph Convolutional Networks
Jin, Hailong and Hou, Lei and Li, Juanzi and Dong, Tiansi
2019
This paper addresses the problem of inferring the fine-grained type of an entity from a knowledge base. We convert this problem into the task of graph-based semi-supervised classification, and propose Hierarchical Multi Graph Convolutional Network (HMGCN), a novel Deep Learning architecture to tackle this problem. We construct three kinds of connectivity matrices to capture different kinds of semantic correlations between entities. A recursive regularization is proposed to model the subClassOf relations between types in given type hierarchy. Extensive experiments with two large-scale public datasets show that our proposed method significantly outperforms four state-of-the-art methods.
4969--4978
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
074e3497b03366caf2e17acd59fb1c52ccf8be55
1
Multi-grained Attention with Object-level Grounding for Visual Question Answering
Huang, Pingping and Huang, Jianhui and Guo, Yuqing and Qiao, Min and Zhu, Yong
2019
Attention mechanisms are widely used in Visual Question Answering (VQA) to search for visual clues related to the question. Most approaches train attention models from a coarse-grained association between sentences and images, which tends to fail on small objects or uncommon concepts. To address this problem, this paper proposes a multi-grained attention method. It learns explicit word-object correspondence by two types of word-level attention complementary to the sentence-image association. Evaluated on the VQA benchmark, the multi-grained attention model achieves competitive performance with state-of-the-art models. And the visualized attention maps demonstrate that addition of object-level groundings leads to a better understanding of the images and locates the attended objects more precisely.
3595--3600
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
144f4d5dcd0b13935ff0d0890c2ec37aa40039b1
0
Automatic Acquisition of Hyponyms from Large Text Corpora
Hearst, Marti A.
1992
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
dbfd191afbbc8317577cbc44afe7156df546e143
1
The Acquisition of Lexical Knowledge from Combined Machine-Readable Dictionary Sources
Sanfilippo, Antonio and Poznatlski, Victor
1992
nan
80--87
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
0616b0f5e6edce01f153081e53bd0152c8d0a4bd
0
Evaluating Pretrained Transformer-based Models on the Task of Fine-Grained Named Entity Recognition
Lothritz, Cedric and Allix, Kevin and Veiber, Lisa and Bissyand{\'e}, Tegawend{\'e} F. and Klein, Jacques
2020
Named Entity Recognition (NER) is a fundamental Natural Language Processing (NLP) task and has remained an active research field. In recent years, transformer models and more specifically the BERT model developed at Google revolutionised the field of NLP. While the performance of transformer-based approaches such as BERT has been studied for NER, there has not yet been a study for the fine-grained Named Entity Recognition (FG-NER) task. In this paper, we compare three transformer-based models (BERT, RoBERTa, and XLNet) to two non-transformer-based models (CRF and BiLSTM-CNN-CRF). Furthermore, we apply each model to a multitude of distinct domains. We find that transformer-based models incrementally outperform the studied non-transformer-based models in most domains with respect to the F1 score. Furthermore, we find that the choice of domains significantly influenced the performance regardless of the respective data size or the model chosen.
3750--3760
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
e25dc08340655401a034a90bf091c1a185c422b6
1
Cross-lingual Semantic Representation for {NLP} with {UCCA}
Abend, Omri and Dvir, Dotan and Hershcovich, Daniel and Prange, Jakob and Schneider, Nathan
2020
This is an introductory tutorial to UCCA (Universal Conceptual Cognitive Annotation), a cross-linguistically applicable framework for semantic representation, with corpora annotated in English, German and French, and ongoing annotation in Russian and Hebrew. UCCA builds on extensive typological work and supports rapid annotation. The tutorial will provide a detailed introduction to the UCCA annotation guidelines, design philosophy and the available resources; and a comparison to other meaning representations. It will also survey the existing parsing work, including the findings of three recent shared tasks, in SemEval and CoNLL, that addressed UCCA parsing. Finally, the tutorial will present recent applications and extensions to the scheme, demonstrating its value for natural language processing in a range of languages and domains.
1--9
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
4ddb820ad3dceeb777967d98fbae6711de5077eb
0
Fine-Grained Entity Typing in Hyperbolic Space
L{\'o}pez, Federico and Heinzerling, Benjamin and Strube, Michael
2019
How can we represent hierarchical information present in large type inventories for entity typing? We study the suitability of hyperbolic embeddings to capture hierarchical relations between mentions in context and their target types in a shared vector space. We evaluate on two datasets and propose two different techniques to extract hierarchical information from the type inventory: from an expert-generated ontology and by automatically mining the dataset. The hyperbolic model shows improvements in some but not all cases over its Euclidean counterpart. Our analysis suggests that the adequacy of this geometry depends on the granularity of the type inventory and the representation of its distribution.
169--180
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
cded59d31ab4841baa1517ff0359f0f2f4b865f5
1
The Impact of Semantic Linguistic Features in Relation Extraction: A Logical Relational Learning Approach
Lima, Rinaldo and Espinasse, Bernard and Freitas, Frederico
2019
Relation Extraction (RE) consists in detecting and classifying semantic relations between entities in a sentence. The vast majority of the state-of-the-art RE systems relies on morphosyntactic features and supervised machine learning algorithms. This paper tries to answer important questions concerning both the impact of semantic based features, and the integration of external linguistic knowledge resources on RE performance. For that, a RE system based on a logical and relational learning algorithm was used and evaluated on three reference datasets from two distinct domains. The yielded results confirm that the classifiers induced using the proposed richer feature set outperformed the classifiers built with morphosyntactic features in average 4{\%} (F1-measure).
648--654
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
72f07997e910d48a7e6557441c893ddca107c6f8
0
Automatic construction of a hypernym-labeled noun hierarchy from text
Caraballo, Sharon A.
1999
nan
120--126
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
aab329ef59d21060c31afce413f6e447b1c0b8b7
1
Improved Alignment Models for Statistical Machine Translation
Och, Franz Josef and Tillmann, Christoph and Ney, Hermann
1999
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8b0495331238da6c0e7be0bfdb9b5453b33c1f98
0
Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking
Murty, Shikhar and Verga, Patrick and Vilnis, Luke and Radovanovic, Irena and McCallum, Andrew
2018
Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies. Previous attempts to incorporate hierarchical structure have yielded little benefit and are restricted to shallow ontologies. This paper presents new methods using real and complex bilinear mappings for integrating hierarchical information, yielding substantial improvement over flat predictions in entity linking and fine-grained entity typing, and achieving new state-of-the-art results for end-to-end models on the benchmark FIGER dataset. We also present two new human-annotated datasets containing wide and deep hierarchies which we will release to the community to encourage further research in this direction: \textit{MedMentions}, a collection of PubMed abstracts in which 246k mentions have been mapped to the massive UMLS ontology; and \textit{TypeNet}, which aligns Freebase types with the WordNet hierarchy to obtain nearly 2k entity types. In experiments on all three datasets we show substantial gains from hierarchy-aware training.
97--109
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
35112824817b78156a6b2bcd2a5622a26ee16600
1
Findings of the {E}2{E} {NLG} Challenge
Du{\v{s}}ek, Ond{\v{r}}ej and Novikova, Jekaterina and Rieser, Verena
2018
This paper summarises the experimental setup and results of the first shared task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue systems. Recent end-to-end generation systems are promising since they reduce the need for data annotation. However, they are currently limited to small, delexicalised datasets. The E2E NLG shared task aims to assess whether these novel approaches can generate better-quality output by learning from a dataset containing higher lexical richness, syntactic complexity and diverse discourse phenomena. We compare 62 systems submitted by 17 institutions, covering a wide range of approaches, including machine learning architectures {--} with the majority implementing sequence-to-sequence models (seq2seq) {--} as well as systems based on grammatical rules and templates.
322--328
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
92cfd6d2eb957805aaf4786dacb484081a469e80
0
No Noun Phrase Left Behind: Detecting and Typing Unlinkable Entities
Lin, Thomas and {Mausam} and Etzioni, Oren
2012
nan
893--903
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
cffb556ee3d1e188f4688b71a8608bbe1883bc49
1
Language Richness of the Web
Majli{\v{s}}, Martin and {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k
2012
We have built a corpus containing texts in 106 languages from texts available on the Internet and on Wikipedia. The W2C Web Corpus contains 54.7{\textasciitilde}GB of text and the W2C Wiki Corpus contains 8.5{\textasciitilde}GB of text. The W2C Web Corpus contains more than 100{\textasciitilde}MB of text available for 75 languages. At least 10{\textasciitilde}MB of text is available for 100 languages. These corpora are a unique data source for linguists, since they outclass all published works both in the size of the material collected and the number of languages covered. This language data resource can be of use particularly to researchers specialized in multilingual technologies development. We also developed software that greatly simplifies the creation of a new text corpus for a given language, using text materials freely available on the Internet. Special attention was given to components for filtering and de-duplication that allow to keep the material quality very high.
2927--2934
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
33e7d35324cd138730022db297c358fc66149160
0
Collective Entity Resolution with Multi-Focal Attention
Globerson, Amir and Lazic, Nevena and Chakrabarti, Soumen and Subramanya, Amarnag and Ringgaard, Michael and Pereira, Fernando
2016
nan
621--631
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
4988a269e9f61c6fd1da502e34648b93dfd1a54d
1
Results of the 4th edition of {B}io{ASQ} Challenge
Krithara, Anastasia and Nentidis, Anastasios and Paliouras, Georgios and Kakadiaris, Ioannis
2016
nan
1--7
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
2d5108706bfd88506c27adceb87ce46e75f9cad2
0
The Automatic Content Extraction ({ACE}) Program {--} Tasks, Data, and Evaluation
Doddington, George and Mitchell, Alexis and Przybocki, Mark and Ramshaw, Lance and Strassel, Stephanie and Weischedel, Ralph
2004
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
0617dd6924df7a3491c299772b70e90507b195dc
1
Generating Paired Transliterated-cognates Using Multiple Pronunciation Characteristics from Web corpora
Kuo, Jin-Shea and Yang, Ying-Kuei
2004
nan
275--282
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
adfbf15a0059f68af95c2836b890577807d66550
0
Transforming {W}ikipedia into Named Entity Training Data
Nothman, Joel and Curran, James R. and Murphy, Tara
2008
nan
124--132
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
04ca48e573c0800fc572f2af1d475dd2645e840a
1
Automatic Extraction of Briefing Templates
Das, Dipanjan and Kumar, Mohit and Rudnicky, Alexander I.
2008
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
e1dfe8ad1f4cfb45484118d3faf0f13505e483e1
0
{AFET}: Automatic Fine-Grained Entity Typing by Hierarchical Partial-Label Embedding
Ren, Xiang and He, Wenqi and Qu, Meng and Huang, Lifu and Ji, Heng and Han, Jiawei
2016
nan
1369--1378
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
ee42c6c3c5db2f0eb40faacf6e3b80035a645287
1
Understanding Discourse on Work and Job-Related Well-Being in Public Social Media
Liu, Tong and Homan, Christopher and Ovesdotter Alm, Cecilia and Lytle, Megan and Marie White, Ann and Kautz, Henry
2016
nan
1044--1053
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
e32666501e824d1cfdc19534b0ed009c7268cd8a
0
{J}-{NERD}: Joint Named Entity Recognition and Disambiguation with Rich Linguistic Features
Nguyen, Dat Ba and Theobald, Martin and Weikum, Gerhard
2016
Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL{'}03, ACE{'}05, and ClueWeb{'}09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.
215--229
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
181e0f95a61d769e01f6d2c520d60d4df228c5c0
1
{F}rench Learners Audio Corpus of {G}erman Speech ({FLACGS})
Wottawa, Jane and Adda-Decker, Martine
2016
The French Learners Audio Corpus of German Speech (FLACGS) was created to compare German speech production of German native speakers (GG) and French learners of German (FG) across three speech production tasks of increasing production complexity: repetition, reading and picture description. 40 speakers, 20 GG and 20 FG performed each of the three tasks, which in total leads to approximately 7h of speech. The corpus was manually transcribed and automatically aligned. Analysis that can be performed on this type of corpus are for instance segmental differences in the speech production of L2 learners compared to native speakers. We chose the realization of the velar nasal consonant engma. In spoken French, engma does not appear in a VCV context which leads to production difficulties in FG. With increasing speech production complexity (reading and picture description), engma is realized as engma + plosive by FG in over 50{\%} of the cases. The results of a two way ANOVA with unequal sample sizes on the durations of the different realizations of engma indicate that duration is a reliable factor to distinguish between engma and engma + plosive in FG productions compared to the engma productions in GG in a VCV context. The FLACGS corpus allows to study L2 production and perception.
3215--3219
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
e5a5dd2a6b9d3aa69f57a7180fd129b338d82866
0
Entity Linking via Joint Encoding of Types, Descriptions, and Context
Gupta, Nitish and Singh, Sameer and Roth, Dan
2017
For accurate entity linking, we need to capture various information aspects of an entity, such as its description in a KB, contexts in which it is mentioned, and structured knowledge. Additionally, a linking system should work on texts from different domains without requiring domain-specific training data or hand-engineered features. In this work we present a neural, modular entity linking system that learns a unified dense representation for each entity using multiple sources of information, such as its description, contexts around its mentions, and its fine-grained types. We show that the resulting entity linking system is effective at combining these sources, and performs competitively, sometimes out-performing current state-of-the-art systems across datasets, without requiring any domain-specific training data or hand-engineered features. We also show that our model can effectively {``}embed{''} entities that are new to the KB, and is able to link its mentions accurately.
2681--2690
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
2927dfc481446568fc9108795570eb4d416be021
1
Efficient Attention using a Fixed-Size Memory Representation
Britz, Denny and Guan, Melody and Luong, Minh-Thang
2017
The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20{\%} for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.
392--400
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
87fc28cbb193a3bc100e13a4a57a8dc9ce7e31a3
0
Structured Generative Models for Unsupervised Named-Entity Clustering
Elsner, Micha and Charniak, Eugene and Johnson, Mark
2009
nan
164--172
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
dfedad21048cafdb7066cd2caeba13228e83d4eb
1
Fertility-based Source-Language-biased Inversion Transduction Grammar for Word Alignment
Huang, Chung-Chi and Chang, Jason S.
2009
nan
1--18
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
498b24e27e2a2cec829afbb1163cd456a01ba668
0
Instance-Based Ontology Population Exploiting Named-Entity Substitution
Giuliano, Claudio and Gliozzo, Alfio
2008
nan
265--272
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
aa480756d7ecee36e500ed05e13d0eb3bfe0aa2d
1
The {M}ove{O}n Motorcycle Speech Corpus
Winkler, Thomas and Kostoulas, Theodoros and Adderley, Richard and Bonkowski, Christian and Ganchev, Todor and K{\"o}hler, Joachim and Fakotakis, Nikos
2008
A speech and noise corpus dealing with the extreme conditions of the motorcycle environment is developed within the MoveOn project. Speech utterances in British English are recorded and processed approaching the issue of command and control and template driven dialog systems on the motorcycle. The major part of the corpus comprises noisy speech and environmental noise recorded on a motorcycle, but several clean speech recordings in a silent environment are also available. The corpus development focuses on distortion free recordings and accurate descriptions of both recorded speech and noise. Not only speech segments are annotated but also annotation of environmental noise is performed. The corpus is a small-sized speech corpus with about 12 hours of clean and noisy speech utterances and about 30 hours of segments with environmental noise without speech. This paper addresses the motivation and development of the speech corpus and finally presents some statistics and results of the database creation.
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
beae598d40e025802505ef076d95e8d3f2ca2d3c
0
Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs
Kozareva, Zornitsa and Riloff, Ellen and Hovy, Eduard
2008
nan
1048--1056
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
194587b9d80e29aa6e50e9b0c628b581b66ea364
1
Machine Translation for {I}ndonesian and {T}agalog
Laugher, Brianna and MacLeod, Ben
2008
Kataku is a hybrid MT system for Indonesian to English and English to Indonesian translation, available on Windows, Linux and web-based platforms. This paper briefly presents the technical background to Kataku, some of its use cases and extensions. Kataku is the flagship product of ToggleText, a language technology company based in Melbourne, Australia.
397--401
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
5ab505f24d2b3f5391ed64b1dbf6796411f6f1fd
0
{S}cience{E}xam{CER}: A High-Density Fine-Grained Science-Domain Corpus for Common Entity Recognition
Smith, Hannah and Zhang, Zeyu and Culnan, John and Jansen, Peter
2020
Named entity recognition identifies common classes of entities in text, but these entity labels are generally sparse, limiting utility to downstream tasks. In this work we present ScienceExamCER, a densely-labeled semantic classification corpus of 133k mentions in the science exam domain where nearly all (96{\%}) of content words have been annotated with one or more fine-grained semantic class labels including taxonomic groups, meronym groups, verb/action groups, properties and values, and synonyms. Semantic class labels are drawn from a manually-constructed fine-grained typology of 601 classes generated through a data-driven analysis of 4,239 science exam questions. We show an off-the-shelf BERT-based named entity recognition model modified for multi-label classification achieves an accuracy of 0.85 F1 on this task, suggesting strong utility for downstream tasks in science domain question answering requiring densely-labeled semantic classification.
4529--4546
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
e9a679c4215762478f1b849101b09102ed39c6b1
1
Exclusive Hierarchical Decoding for Deep Keyphrase Generation
Chen, Wang and Chan, Hou Pong and Li, Piji and King, Irwin
2020
Keyphrase generation (KG) aims to summarize the main ideas of a document into a set of keyphrases. A new setting is recently introduced into this problem, in which, given a document, the model needs to predict a set of keyphrases and simultaneously determine the appropriate number of keyphrases to produce. Previous work in this setting employs a sequential decoding process to generate keyphrases. However, such a decoding method ignores the intrinsic hierarchical compositionality existing in the keyphrase set of a document. Moreover, previous work tends to generate duplicated keyphrases, which wastes time and computing resources. To overcome these limitations, we propose an exclusive hierarchical decoding framework that includes a hierarchical decoding process and either a soft or a hard exclusion mechanism. The hierarchical decoding process is to explicitly model the hierarchical compositionality of a keyphrase set. Both the soft and the hard exclusion mechanisms keep track of previously-predicted keyphrases within a window size to enhance the diversity of the generated keyphrases. Extensive experiments on multiple KG benchmark datasets demonstrate the effectiveness of our method to generate less duplicated and more accurate keyphrases.
1095--1105
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
ba46ece6feba34c408d081a8dce66f0ecf4b7a60
0
Two/Too Simple Adaptations of {W}ord2{V}ec for Syntax Problems
Ling, Wang and Dyer, Chris and Black, Alan W. and Trancoso, Isabel
2015
nan
1299--1304
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
b92513dac9d5b6a4683bcc625b94dd1ced98734e
1
Gradiant-Analytics: Training Polarity Shifters with {CRF}s for Message Level Polarity Detection
Cerezo-Costas, H{\'e}ctor and Celix-Salgado, Diego
2015
nan
539--544
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
a109aa9ab559dbdbd57df26ba2a5a08fba7aa34b
0
Knowledge Base Population: Successful Approaches and Challenges
Ji, Heng and Grishman, Ralph
2011
nan
1148--1158
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
77d2698e8efadda698b0edb457cd8de75224bfa0
1
Using machine translation in computer-aided translation to suggest the target-side words to change
Espl{\`a}-Gomis, Miquel and S{\'a}nchez-Mart{\'\i}nez, Felipe and Forcada, Mikel L.
2011
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
ffe80022365aa36087f736a57e536dffea78ab53
0
Fine-Grained Entity Type Classification by Jointly Learning Representations and Label Embeddings
Abhishek, Abhishek and Anand, Ashish and Awekar, Amit
2017
Fine-grained entity type classification (FETC) is the task of classifying an entity mention to a broad set of types. Distant supervision paradigm is extensively used to generate training data for this task. However, generated training data assigns same set of labels to every mention of an entity without considering its local context. Existing FETC systems have two major drawbacks: assuming training data to be noise free and use of hand crafted features. Our work overcomes both drawbacks. We propose a neural network model that jointly learns entity mentions and their context representation to eliminate use of hand crafted features. Our model treats training data as noisy and uses non-parametric variant of hinge loss function. Experiments show that the proposed model outperforms previous state-of-the-art methods on two publicly available datasets, namely FIGER (GOLD) and BBN with an average relative improvement of 2.69{\%} in micro-F1 score. Knowledge learnt by our model on one dataset can be transferred to other datasets while using same model or other FETC systems. These approaches of transferring knowledge further improve the performance of respective models.
797--807
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f4283dbf7883b1ab1a7fe01b58ebd627bcfdf008
1
Grounding Language by Continuous Observation of Instruction Following
Han, Ting and Schlangen, David
2017
Grounded semantics is typically learnt from utterance-level meaning representations (e.g., successful database retrievals, denoted objects in images, moves in a game). We explore learning word and utterance meanings by continuous observation of the actions of an instruction follower (IF). While an instruction giver (IG) provided a verbal description of a configuration of objects, IF recreated it using a GUI. Aligning these GUI actions to sub-utterance chunks allows a simple maximum entropy model to associate them as chunk meaning better than just providing it with the utterance-final configuration. This shows that semantics useful for incremental (word-by-word) application, as required in natural dialogue, might also be better acquired from incremental settings.
491--496
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
0ceab2555088ce7ca49336f472f5902191661ff1
0
{M}essage {U}nderstanding {C}onference- 6: A Brief History
Grishman, Ralph and Sundheim, Beth
1996
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
6723dda58e5e09089ec78ba42827b65859f030e2
1
{C}hinese String Searching Using the {KMP} Algorithm
Luk, Robert W.P.
1996
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
5f0d19f9fbbb03b8fb0fad8e810734da0242a60c
0
Collective Cross-Document Relation Extraction Without Labelled Data
Yao, Limin and Riedel, Sebastian and McCallum, Andrew
2010
nan
1013--1023
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
a3fa819575c78be3cbcc8aa394fd21a182dce669
1
Employing Machine Translation in Glocalization Tasks {--} A Use Case Study
Sch{\"u}tz, J{\"o}rg and Andr{\"a}, Sven Christian
2010
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
7dafa546ad602a551fda556e1db89c522e506a9d
0
Class-Based \textit{n}-gram Models of Natural Language
Brown, Peter F. and Della Pietra, Vincent J. and deSouza, Peter V. and Lai, Jenifer C. and Mercer, Robert L.
1992
nan
467--480
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
3de5d40b60742e3dfa86b19e7f660962298492af
1
{N}ew {Y}ork {U}niversity Description of the {PROTEUS} System as Used for {MUC}-4
Grishman, Ralph and Macleod, Catherine and Sterling, John
1992
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
74384de9df7422e25c3e66749ec4674b474e13f8
0
Creating an Extended Named Entity Dictionary from {W}ikipedia
Higashinaka, Ryuichiro and Sadamitsu, Kugatsu and Saito, Kuniko and Makino, Toshiro and Matsuo, Yoshihiro
2012
nan
1163--1178
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
06d44cb242b2454527e6e5e0b020664ab65a3059
1
String Re-writing Kernel
Bu, Fan and Li, Hang and Zhu, Xiaoyan
2012
nan
449--458
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
32924bbd545430a09969fc700965f9e030c45e67
0
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian
2018
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently no resources exist to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence {---} effectively performing multihop, alias multi-step, inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information; and providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 54.5{\%} on an annotated test set, compared to human performance at 85.0{\%}, leaving ample room for improvement.
287--302
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
7d5cf22c70484fe217936c66741fb73b2a278bde
1
{E}i{TAKA} at {S}em{E}val-2018 Task 1: An Ensemble of N-Channels {C}onv{N}et and {XG}boost Regressors for Emotion Analysis of Tweets
Jabreel, Mohammed and Moreno, Antonio
2018
This paper describes our system that has been used in Task1 Affect in Tweets. We combine two different approaches. The first one called N-Stream ConvNets, which is a deep learning approach where the second one is XGboost regressor based on a set of embedding and lexicons based features. Our system was evaluated on the testing sets of the tasks outperforming all other approaches for the Arabic version of valence intensity regression task and valence ordinal classification task.
193--199
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
58ee3e694bcaa126d9f2438dc0326824a0de5584
0
Improving Fine-grained Entity Typing with Entity Linking
Dai, Hongliang and Du, Donghong and Li, Xin and Song, Yangqiu
2019
Fine-grained entity typing is a challenging problem since it usually involves a relatively large tag set and may require to understand the context of the entity mention. In this paper, we use entity linking to help with the fine-grained entity type classification process. We propose a deep neural model that makes predictions based on both the context and the information obtained from entity linking results. Experimental results on two commonly used datasets demonstrates the effectiveness of our approach. On both datasets, it achieves more than 5{\%} absolute strict accuracy improvement over the state of the art.
6210--6215
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
b74b272c7fe881614f3eb8c2504b037439571eec
1
Comparison of Diverse Decoding Methods from Conditional Language Models
Ippolito, Daphne and Kriz, Reno and Sedoc, Jo{\~a}o and Kustikova, Maria and Callison-Burch, Chris
2019
While conditional language models have greatly improved in their ability to output high quality natural language, many NLP applications benefit from being able to generate a diverse set of candidate sequences. Diverse decoding strategies aim to, within a given-sized candidate list, cover as much of the space of high-quality outputs as possible, leading to improvements for tasks that rerank and combine candidate outputs. Standard decoding methods, such as beam search, optimize for generating high likelihood sequences rather than diverse ones, though recent work has focused on increasing diversity in these methods. In this work, we perform an extensive survey of decoding-time strategies for generating diverse outputs from a conditional language model. In addition, we present a novel method where we over-sample candidates, then use clustering to remove similar sequences, thus achieving high diversity without sacrificing quality.
3752--3762
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
fd846869e6f25d9b1a524aef8b54a08b81a1b1fa
0
Generating Fine-Grained Open Vocabulary Entity Type Descriptions
Bhowmik, Rajarshi and de Melo, Gerard
2018
While large-scale knowledge graphs provide vast amounts of structured facts about entities, a short textual description can often be useful to succinctly characterize an entity and its type. Unfortunately, many knowledge graphs entities lack such textual descriptions. In this paper, we introduce a dynamic memory-based network that generates a short open vocabulary description of an entity by jointly leveraging induced fact embeddings as well as the dynamic context of the generated sequence of words. We demonstrate the ability of our architecture to discern relevant information for more accurate generation of type description by pitting the system against several strong baselines.
877--888
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
e2468753d57300d06accc5a31479e6803c90e5d4
1
Polyglot Semantic Parsing in {API}s
Richardson, Kyle and Berant, Jonathan and Kuhn, Jonas
2018
Traditional approaches to semantic parsing (SP) work by training individual models for each available parallel dataset of text-meaning pairs. In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages. In particular, we focus on translating text to code signature representations using the software component datasets of Richardson and Kuhn (2017b,a). The advantage of such models is that they can be used for parsing a wide variety of input natural languages and output programming languages, or mixed input languages, using a single unified model. To facilitate modeling of this type, we develop a novel graph-based decoding framework that achieves state-of-the-art performance on the above datasets, and apply this method to two other benchmark SP tasks.
720--730
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
cf8ec112520e53a3864f01a827b085c3869c55e8
0
Fine Grained Classification of Named Entities
Fleischman, Michael and Hovy, Eduard
2002
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
198b711915429fa55162e749a0b964755b36a62e
1
How to prevent adjoining in {TAG}s and its impact on the Average Case Complexity
Woch, Jens
2002
nan
102--107
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
e0fb80262a15e457af9e0f3de46c1f35c8f8da82
0
Neural Architectures for Fine-grained Entity Type Classification
Shimaoka, Sonse and Stenetorp, Pontus and Inui, Kentaro and Riedel, Sebastian
2017
In this work, we investigate several neural network architectures for fine-grained entity type classification and make three key contributions. Despite being a natural comparison and addition, previous work on attentive neural architectures have not considered hand-crafted features and we combine these with learnt features and establish that they complement each other. Additionally, through quantitative analysis we establish that the attention mechanism learns to attend over syntactic heads and the phrase containing the mention, both of which are known to be strong hand-crafted features for our task. We introduce parameter sharing between labels through a hierarchical encoding method, that in low-dimensional projections show clear clusters for each type hierarchy. Lastly, despite using the same evaluation dataset, the literature frequently compare models trained using different data. We demonstrate that the choice of training data has a drastic impact on performance, which decreases by as much as 9.85{\%} loose micro F1 score for a previously proposed method. Despite this discrepancy, our best model achieves state-of-the-art results with 75.36{\%} loose micro F1 score on the well-established Figer (GOLD) dataset and we report the best results for models trained using publicly available data for the OntoNotes dataset with 64.93{\%} loose micro F1 score.
1271--1280
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
800dd1672789fe97513b84e65e75e370b10d6c13
1
Question Difficulty {--} How to Estimate Without Norming, How to Use for Automated Grading
Pad{\'o}, Ulrike
2,017
Question difficulty estimates guide test creation, but are too costly for small-scale testing. We empirically verify that Bloom{'}s Taxonomy, a standard tool for difficulty estimation during question creation, reliably predicts question difficulty observed after testing in a short-answer corpus. We also find that difficulty is mirrored in the amount of variation in student answers, which can be computed before grading. We show that question difficulty and its approximations are useful for \textit{automated grading}, allowing us to identify the optimal feature set for grading each question even in an unseen-question setting.
1--10
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
12f66979e1b1a7d8c2e1a4ff8a6219c19483138e
0
Learning to Bootstrap for Entity Set Expansion
Yan, Lingyong and Han, Xianpei and Sun, Le and He, Ben
2,019
Bootstrapping for Entity Set Expansion (ESE) aims at iteratively acquiring new instances of a specific target category. Traditional bootstrapping methods often suffer from two problems: 1) delayed feedback, i.e., the pattern evaluation relies on both its direct extraction quality and the extraction quality in later iterations; 2) sparse supervision, i.e., only a few seed entities are used as the supervision. To address the above two problems, we propose a novel bootstrapping method combining the Monte Carlo Tree Search (MCTS) algorithm with a deep similarity network, which can efficiently estimate delayed feedback for pattern evaluation and adaptively score entities given sparse supervision signals. Experimental results confirm the effectiveness of the proposed method.
292--301
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
b2057b7ae7205c3fce709d349d575683a3dc40d1
1
Evidence Sentence Extraction for Machine Reading Comprehension
Wang, Hai and Yu, Dian and Sun, Kai and Chen, Jianshu and Yu, Dong and McAllester, David and Roth, Dan
2,019
Remarkable success has been achieved in the last few years on some limited machine reading comprehension (MRC) tasks. However, it is still difficult to interpret the predictions of existing MRC models. In this paper, we focus on extracting evidence sentences that can explain or support the answers of multiple-choice MRC tasks, where the majority of answer options cannot be directly extracted from reference documents. Due to the lack of ground truth evidence sentence labels in most cases, we apply distant supervision to generate imperfect labels and then use them to train an evidence sentence extractor. To denoise the noisy labels, we apply a recently proposed deep probabilistic logic learning framework to incorporate both sentence-level and cross-sentence linguistic indicators for indirect supervision. We feed the extracted evidence sentences into existing MRC models and evaluate the end-to-end performance on three challenging multiple-choice MRC datasets: MultiRC, RACE, and DREAM, achieving performance comparable to or better than that of the same models that take the full reference document as input. To the best of our knowledge, this is the first work extracting evidence sentences for multiple-choice MRC.
696--707
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
bb104dc51121a0f64a5327526fad449cb03dd1bb
0
Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss
Xu, Peng and Barbosa, Denilson
2,018
The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly-specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of the cross-entropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly-specific ones. Also, previous work solves FETC as a multi-label classification problem followed by ad-hoc post-processing. In contrast, our solution is more elegant: we use public word embeddings to train a single-label model that jointly learns representations for entity mentions and their context. We show experimentally that our approach is robust against noise and consistently outperforms the state-of-the-art on established benchmarks for the task.
16--25
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
008405f7ee96677ac23cc38be360832af2d9f437
1
Cross-Domain Sentiment Classification with Target Domain Specific Information
Peng, Minlong and Zhang, Qi and Jiang, Yu-gang and Huang, Xuanjing
2,018
The task of adapting a model with good performance to a target domain that is different from the source domain used for training has received considerable attention in sentiment analysis. Most existing approaches mainly focus on learning representations that are domain-invariant in both the source and target domains. Few of them pay attention to domain-specific information, which should also be informative. In this work, we propose a method to simultaneously extract domain-specific and domain-invariant representations and train a classifier on each of the representations, respectively. We also introduce a small amount of target domain labeled data for learning domain-specific information. To effectively utilize the target domain labeled data, we train the domain-invariant representation based classifier with both the source and target domain labeled data and train the domain-specific representation based classifier with only the target domain labeled data. These two classifiers then boost each other in a co-training style. Extensive sentiment analysis experiments demonstrated that the proposed method could achieve better performance than state-of-the-art methods.
2505--2513
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
a99122b0b96cc40982ac267ba7e99c72a1dcc2e9
0
An Attentive Neural Architecture for Fine-grained Entity Type Classification
Shimaoka, Sonse and Stenetorp, Pontus and Inui, Kentaro and Riedel, Sebastian
2,016
nan
69--74
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
5c22ff7fe5fc588e3648b5897255f151feb61fee
1
Producing Monolingual and Parallel Web Corpora at the Same Time - {S}pider{L}ing and Bitextor{'}s Love Affair
Ljube{\v{s}}i{\'c}, Nikola and Espl{\`a}-Gomis, Miquel and Toral, Antonio and Rojas, Sergio Ortiz and Klubi{\v{c}}ka, Filip
2,016
This paper presents an approach for building large monolingual corpora and, at the same time, extracting parallel data by crawling the top-level domain of a given language of interest. For gathering linguistically relevant data from top-level domains we use the SpiderLing crawler, modified to crawl data written in multiple languages. The output of this process is then fed to Bitextor, a tool for harvesting parallel data from a collection of documents. We call the system combining these two tools Spidextor, a blend of the names of its two crucial parts. We evaluate the described approach intrinsically by measuring the accuracy of the extracted bitexts from the Croatian top-level domain {``}.hr{''} and the Slovene top-level domain {``}.si{''}, and extrinsically on the English-Croatian language pair by comparing an SMT system built from the crawled data with third-party systems. We finally present parallel datasets collected with our approach for the English-Croatian, English-Finnish, English-Serbian and English-Slovene language pairs.
2949--2956
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
e5ce9182054fa811d3c65c7e98b30bf2d90af8a4
0
Assessing the Challenge of Fine-Grained Named Entity Recognition and Classification
Ekbal, Asif and Sourjikova, Eva and Frank, Anette and Ponzetto, Simone Paolo
2,010
nan
93--101
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
9538eed00e2ed9d077c88afe7492766f6855ba78
1
Workshop on Advanced Corpus Solutions
Johannessen, Janne Bondi
2,010
nan
717--719
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
a33ace522fd2a4dd26cb02a4d62eec8436e8f6b3
0
An Attentive Fine-Grained Entity Typing Model with Latent Type Representation
Lin, Ying and Ji, Heng
2,019
We propose a fine-grained entity typing model with a novel attention mechanism and a hybrid type classifier. We advance existing methods in two aspects: feature extraction and type prediction. To capture richer contextual information, we adopt contextualized word representations instead of fixed word embeddings used in previous work. In addition, we propose a two-step mention-aware attention mechanism to enable the model to focus on important words in mentions and contexts. We also present a hybrid classification method beyond binary relevance to exploit type inter-dependency with latent type representation. Instead of independently predicting each type, we predict a low-dimensional vector that encodes latent type features and reconstruct the type vector from this latent representation. Experiment results on multiple data sets show that our model significantly advances the state-of-the-art on fine-grained entity typing, obtaining up to 6.1{\%} and 5.5{\%} absolute gains in macro averaged F-score and micro averaged F-score respectively.
6197--6202
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
ed3a6ff80bd9892a5d8bf6490147fcd518ebc413
1
An adaptable task-oriented dialog system for stand-alone embedded devices
Duong, Long and Hoang, Vu Cong Duy and Pham, Tuyen Quang and Hong, Yu-Heng and Dovgalecs, Vladislavs and Bashkansky, Guy and Black, Jason and Bleeker, Andrew and Huitouze, Serge Le and Johnson, Mark
2,019
This paper describes a spoken-language end-to-end task-oriented dialogue system for small embedded devices such as home appliances. While the current system implements a smart alarm clock with advanced calendar scheduling functionality, the system is designed to make it easy to port to other application domains (e.g., the dialogue component factors out domain-specific execution from domain-general actions such as requesting and updating slot values). The system does not require internet connectivity because all components, including speech recognition, natural language understanding, dialogue management, execution and text-to-speech, run locally on the embedded device (our demo uses a Raspberry Pi). This simplifies deployment, minimizes server costs and, most importantly, eliminates user privacy risks. A demo video in the alarm domain is available at youtu.be/N3IBMGocvHU.
49--57
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
ea2ad7e330070aed3e909a2e263ae88e320663b3
0
Transforming {W}ikipedia into a Large-Scale Fine-Grained Entity Type Corpus
Ghaddar, Abbas and Langlais, Philippe
2,018
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8296e7f869172ac6fe34380574706f753328eda5
1
{B}in{L}in: A Simple Method of Dependency Tree Linearization
Puzikov, Yevgeniy and Gurevych, Iryna
2,018
Surface Realization Shared Task 2018 is a workshop on generating sentences from lemmatized sets of dependency triples. This paper describes the results of our participation in the challenge. We develop a data-driven pipeline system which first orders the lemmas and then conjugates the words to finish the surface realization process. Our contribution is a novel sequential method of ordering lemmas, which, despite its simplicity, achieves promising results. We demonstrate the effectiveness of the proposed approach, describe its limitations and outline ways to improve it.
13--28
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
933afbd7aea1671e6950d65511c23af7adf38de1
0
A Joint Model for Entity Analysis: Coreference, Typing, and Linking
Durrett, Greg and Klein, Dan
2,014
We present a joint model of three core tasks in the entity analysis stack: coreference resolution (within-document clustering), named entity recognition (coarse semantic typing), and entity linking (matching to Wikipedia entities). Our model is formally a structured conditional random field. Unary factors encode local features from strong baselines for each task. We then add binary and ternary factors to capture cross-task interactions, such as the constraint that coreferent mentions have the same semantic type. On the ACE 2005 and OntoNotes datasets, we achieve state-of-the-art results for all three tasks. Moreover, joint modeling improves performance on each task over strong independent baselines.
477--490
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
28eb033eee5f51c5e5389cbb6b777779203a6778
1
Transfer learning of feedback head expressions in {D}anish and {P}olish comparable multimodal corpora
Navarretta, Costanza and Lis, Magdalena
2,014
The paper is an investigation of the reusability of the annotations of head movements in a corpus in a language to predict the feedback functions of head movements in a comparable corpus in another language. The two corpora consist of naturally occurring triadic conversations in Danish and Polish, which were annotated according to the same scheme. The intersection of common annotation features was used in the experiments. A Na{\"\i}ve Bayes classifier was trained on the annotations of a corpus and tested on the annotations of the other corpus. Training and test datasets were then reversed and the experiments repeated. The results show that the classifier identifies more feedback behaviours than the majority baseline in both cases and the improvements are significant. The performance of the classifier decreases significantly compared with the results obtained when training and test data belong to the same corpus. Annotating multimodal data is resource consuming, thus the results are promising. However, they also confirm preceding studies that have identified both similarities and differences in the use of feedback head movements in different languages. Since our datasets are small and only regard a communicative behaviour in two languages, the experiments should be tested on more data types.
3597--3603
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
afd1491f8a07ce7a4388e0ab40bf4eb7c06333a8
0
{W}iki{C}oref: An {E}nglish Coreference-annotated Corpus of {W}ikipedia Articles
Ghaddar, Abbas and Langlais, Phillippe
2,016
This paper presents WikiCoref, an English corpus annotated for anaphoric relations, where all documents are from the English version of Wikipedia. Our annotation scheme follows the one of OntoNotes with a few disparities. We annotated each markable with coreference type, mention type and the equivalent Freebase topic. Since most similar annotation efforts concentrate on very specific types of written text, mainly newswire, there is a lack of resources for otherwise over-used Wikipedia texts. The corpus described in this paper addresses this issue. We present a freely available resource we initially devised for improving coreference resolution algorithms dedicated to Wikipedia texts. Our corpus has no restriction on the topics of the documents being annotated, and documents of various sizes have been considered for annotation.
136--142
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
fe5cffa25cb2ab412da9f19ea7e9656ac72d454c
1
Improving Reliability of Word Similarity Evaluation by Redesigning Annotation Task and Performance Measure
Avraham, Oded and Goldberg, Yoav
2,016
nan
106--110
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
1c7174bd2b01831920217088e2b48cb151691110
0
