id
stringlengths
32
33
x
stringlengths
41
1.75k
y
stringlengths
4
39
e264c45391853fb008c838aa7ccca8_6
Our method of acquiring hyponymy relations is an extension of the supervised method proposed by <cite>Sumida and Torisawa (2008)</cite> , but differs in the way of enumerating hyponymy relation candidates (hereafter, HRCs) from the hierarchical layouts, and in the features of machine learning.
extends differences
e264c45391853fb008c838aa7ccca8_7
We obtain HRCs by considering the title of each marked-up item as a hypernym candidate, and titles of its all subordinate marked-up items as its hyponym candidates; for example, we extract 'England', 'France', 'Wedgwood', 'Lipton', and 'Fauchon' as hyponym candidates of 'Common tea brands' from the hierarchical structure in Figure 1 . Note that <cite>Sumida and Torisawa (2008)</cite> extracted HRCs by regarding the title of each marked-up item as a hypernym candidate and titles of its direct subordinate marked-up items as its hyponyms; for example, they extracted only 'England' and 'France' as hyponym candidates of 'Common tea brands' from the hierarchical structure in Figure 1 .
differences
e264c45391853fb008c838aa7ccca8_8
In what follows, we briefly review the features proposed by <cite>Sumida and Torisawa (2008)</cite> , and then explain the novel features introduced in this study.
uses
e264c45391853fb008c838aa7ccca8_9
We expect that the readers will refer to the literature<cite> (Sumida and Torisawa, 2008)</cite> to see the effect of the features proposed by Sumida and Torisawa.
background
e264c45391853fb008c838aa7ccca8_10
ATTR Using the attribute set created by <cite>Sumida and Torisawa (2008)</cite> , when a hypernym/hyponym is included as an element of the attribute set, we set a feature corresponding to the element to 1.
uses
e264c45391853fb008c838aa7ccca8_11
This reflects Sumida and Torisawa's observation that HRCs whose hypernym matches the patterns are likely to be correct<cite> (Sumida and Torisawa, 2008)</cite> .
similarities
e264c45391853fb008c838aa7ccca8_12
The row titled 'S & T (2008) ' shows the performance of the method proposed by <cite>Sumida and Torisawa (2008)</cite> .
uses
e264c45391853fb008c838aa7ccca8_13
We successfully obtained more than 1.73 million hyponymy relations with 85.2% precision, which greatly outperformed the results of <cite>Sumida and Torisawa (2008)</cite> in terms of both the precision and the number of acquired hyponymy relations.
differences
e29c7551ea78cb425054963489e1b9_0
<cite>Salesky et al. (2018)</cite> introduced a set of fluent references 1 for Fisher Spanish-English, enabling a new task: end-to-end training and evaluation against fluent references.
background
e29c7551ea78cb425054963489e1b9_1
Further, corpora can have different translation and annotation schemes: for example for Fisher Spanish-English, translated using Mechanical Turk, <cite>Salesky et al. (2018)</cite> found 268 unique filler words due to spelling and casing.
background
e29c7551ea78cb425054963489e1b9_2
For our experiments, we use Fisher Spanish speech (Graff et al.) and with two sets of English translations<cite> (Salesky et al., 2018</cite>; Post et al., 2013) .
uses
e29c7551ea78cb425054963489e1b9_3
<cite>Salesky et al. (2018)</cite> introduced a new set of fluent reference translations collected on Mechanical Turk.
background
e29c7551ea78cb425054963489e1b9_4
Using clean references for disfluent data collected by <cite>Salesky et al. (2018)</cite> , we extend their text baseline to speech input and provide first results for direct generation of fluent text from noisy disfluent speech.
extends
e3ee86bbaca6ae00906e7ec64f0ac0_0
Zhiyuan Chen <cite>[2]</cite> ever proposed a approach to determine which domain dose a word have the sentiment orientation to achieve the goal of lifelong learning.
background
e3ee86bbaca6ae00906e7ec64f0ac0_1
Zhiyuan Chen <cite>[2]</cite> ever proposed a approach to determine which domain dose a word have the sentiment orientation to achieve the goal of lifelong learning. He made a big progress but the supervised learning still is needed.
motivation
e3ee86bbaca6ae00906e7ec64f0ac0_2
Zhiyuan and Bing <cite>[2]</cite> improved the sentiment classification by involving knowledge.
background
e3ee86bbaca6ae00906e7ec64f0ac0_3
• Knowledge Base (KB): The knowledge Base <cite>[2]</cite> mainly used to maintain the previous knowledge.
background
e3ee86bbaca6ae00906e7ec64f0ac0_4
• Knowledge-Base Learner (KBL): The Knowledge-Based Learner <cite>[2]</cite> aims to retrieve and transfer previous knowledge to the current task.
background
e3ee86bbaca6ae00906e7ec64f0ac0_5
Previous classical paper <cite>[2]</cite> chose the sentiment classification as the learning target because it is could be regarded as a task as well as a group of subtasks in different domain.
background
e3ee86bbaca6ae00906e7ec64f0ac0_6
Previous classical paper <cite>[2]</cite> chose the sentiment classification as the learning target because it is could be regarded as a task as well as a group of subtasks in different domain. These sub-tasks related to each other but a model trained on a domain is unable to perform well in rest domains.
motivation
e3ee86bbaca6ae00906e7ec64f0ac0_7
"Lifelong Sentiment Classification" ("LSC" for simple below) <cite>[2]</cite> records that which domain does a word have the sentiment orientation.
background
e3ee86bbaca6ae00906e7ec64f0ac0_8
Although LSC <cite>[2]</cite> already raised a lifelong approach, it only aims to improve the classification accuracy.
motivation
e3ee86bbaca6ae00906e7ec64f0ac0_9
Use use the same formula as LSC <cite>[2]</cite> used below.
uses
e3ee86bbaca6ae00906e7ec64f0ac0_10
LSC <cite>[2]</cite> discussed a possible solution of P(w |c j ).
background
e3ee86bbaca6ae00906e7ec64f0ac0_11
Although LSC <cite>[2]</cite> considered the difference among domains, it still is a typical supervised learning approach.
motivation
e3ee86bbaca6ae00906e7ec64f0ac0_12
In the experiment, we use the same datasets as LSC <cite>[2]</cite> used.
uses
e4452ce844b74c35f257c916aae120_0
However, recently the reverse process, i.e. the generation of texts from AMRs, has started to receive scholarly attention (Flanigan et al., 2016; Song et al., 2016; <cite>Pourdamghani et al., 2016</cite>; Song et al., 2017; Konstas et al., 2017) .
background
e4452ce844b74c35f257c916aae120_1
However, recently the reverse process, i.e. the generation of texts from AMRs, has started to receive scholarly attention (Flanigan et al., 2016; Song et al., 2016; <cite>Pourdamghani et al., 2016</cite>; Song et al., 2017; Konstas et al., 2017) . We assume that in practical applications, conceptualisation models or dialogue managers (models which decide "what to say") output AMRs. In this paper we study different ways in which these AMRs can be converted into natural language (deciding "how to say it").
motivation
e4452ce844b74c35f257c916aae120_2
Motivated by this similarity,<cite> Pourdamghani et al. (2016)</cite> proposed an AMR-to-text method that organises some of these concepts and edges in a flat representation, commonly known as Linearisation. Once the linearisation is complete,<cite> Pourdamghani et al. (2016)</cite> map the flat AMR into an English sentence using a Phrase-Based Machine Translation (PBMT) system.
background
e4452ce844b74c35f257c916aae120_3
In addition,<cite> Pourdamghani et al. (2016)</cite> use PBMT, which is devised for translation but also utilised in other NLP tasks, e.g. text simplification (Wubben et al., 2012; Štajner et al., 2015) .
background
e4452ce844b74c35f257c916aae120_4
To address this,<cite> Pourdamghani et al. (2016)</cite> look for special realisation component for names, dates and numbers in development and test sets and add them on the training set.
background
e4452ce844b74c35f257c916aae120_5
Following the aligner of Pourdamghani et al. (2014) ,<cite> Pourdamghani et al. (2016)</cite> clean an AMR by removing some nodes and edges independent of the context.
background
e4452ce844b74c35f257c916aae120_6
Following the aligner of Pourdamghani et al. (2014) ,<cite> Pourdamghani et al. (2016)</cite> clean an AMR by removing some nodes and edges independent of the context. Instead, we are using alignments that may relate a given node or edge to an English word according to the context.
differences
e4452ce844b74c35f257c916aae120_7
After Compression, we flatten the AMR to serve as input to the translation step, similarly as proposed in<cite> Pourdamghani et al. (2016)</cite> .
similarities
e4452ce844b74c35f257c916aae120_8
In a second step, also following<cite> Pourdamghani et al. (2016)</cite>, we implemented a version of the 2-Step Classifier from Lerner and Petrov (2013) to preorder the elements from an AMR according to the target side.
uses
e4452ce844b74c35f257c916aae120_9
We compare BLEU scores for some of the AMRto-text systems described in the literature (Flanigan et al., 2016; Song et al., 2016; <cite>Pourdamghani et al., 2016</cite>; Song et al., 2017; Konstas et al., 2017) .
uses
e4452ce844b74c35f257c916aae120_10
Since the models of Flanigan et al. (2016) and<cite> Pourdamghani et al. (2016)</cite> are publicly available, we also use them with the same training data as our models.
uses
e4452ce844b74c35f257c916aae120_11
We compare BLEU scores for some of the AMRto-text systems described in the literature (Flanigan et al., 2016; Song et al., 2016; <cite>Pourdamghani et al., 2016</cite>; Song et al., 2017; Konstas et al., 2017) . Since the models of Flanigan et al. (2016) and<cite> Pourdamghani et al. (2016)</cite> are publicly available, we also use them with the same training data as our models. For Flanigan et al. (2016) , we specifically use the version available on GitHub 2 . For<cite> Pourdamghani et al. (2016)</cite> , we use the version available at the first author's website 3 .
uses
e4452ce844b74c35f257c916aae120_12
We do not use lexicalised reordering models as<cite> Pourdamghani et al. (2016)</cite> . Moreover, we tune the weights of the feature functions with MERT (Och, 2003) .
differences
e4452ce844b74c35f257c916aae120_13
The models of Flanigan et al. (2016) and<cite> Pourdamghani et al. (2016)</cite> were officially trained with 10,313 AMR-sentence pairs from the LDC2014T12 corpus, and with 36,521 AMR-sentence pairs from the LDC2016E25 in our study (as our models).
differences
e4452ce844b74c35f257c916aae120_14
All the models with the preordering method in Linearisation Song et al. (2017) and introduce competitive results with<cite> Pourdamghani et al. (2016)</cite> .
similarities
e4452ce844b74c35f257c916aae120_15
Our best model (PBMT-Delex+Compress+Preorder) presents competitive results to<cite> Pourdamghani et al. (2016)</cite> with the advantage that no technique is necessary to overcome data sparsity.
similarities
e4452ce844b74c35f257c916aae120_16
We note that the preordering success was expected, based on previous results<cite> (Pourdamghani et al., 2016)</cite> .
uses
e4452ce844b74c35f257c916aae120_17
PBMT models trained on small data sets clearly outperform NMT ones, e.g. Konstas et al. (2017) reported 22.0 BLEU, whereas<cite> Pourdamghani et al. (2016)</cite> 's best model achieved 26.9 BLEU, and our best model performs comparably (26.8 BLEU).
similarities
e4452ce844b74c35f257c916aae120_18
In such situations, our PBMT models, like<cite> Pourdamghani et al. (2016)</cite> , look appear to be a good alternative option.
similarities
e48a1eac39987cb2f504b66d135572_0
Texts from biomedical publications and electronic medical record have been used to pre-train BERT models for NLP task in this domain and showed considerable improvement in many downstream tasks (Lee et al., 2019;<cite> Alsentzer et al., 2019</cite>; Si et al., 2019) .
motivation background
e48a1eac39987cb2f504b66d135572_1
Later in the year,<cite> Alsentzer et al. (2019)</cite> and Si et al. (2019) published almost at the same time BERT models pre-trained trained on publicly available clinical notes from MIMIC3 either starting from trained parameters of original BERT or BioBERT model and show improvement of clinical NLP tasks.
background
e48a1eac39987cb2f504b66d135572_2
We included only discharge summaries in our study as previous studies have shown that performance of a model trained on only discharge summaries in this corpus is only marginally worse than model trained on all notes types<cite> (Alsentzer et al., 2019)</cite> .
similarities
e48a1eac39987cb2f504b66d135572_3
The prepossessing and tokenization pipeline from<cite> Alsentzer et al. 2019</cite><cite> (Alsentzer et al., 2019</cite> was adapted.
similarities uses
e48a1eac39987cb2f504b66d135572_4
Centralized fine-tuning of the i2b2 NER tasks plateaued after 4 epochs, with the learning rate set at 2e − 5 and a batch size of 32<cite> (Alsentzer et al., 2019)</cite> .
background
e48a1eac39987cb2f504b66d135572_5
Clinical BERT pre-trained on MIMIC corpus has been reported to have superior performance on NER tasks in Inside-Outside-Beginning (IOB) format (Ramshaw and Marcus, 1999) using i2b2 2010 (Uzuner et al., 2011) and 2012 (Sun et al., 2013) data<cite> (Alsentzer et al., 2019)</cite> .Original training/development/test splits in the challenges were used.
background
e48a1eac39987cb2f504b66d135572_6
We also looked at the scenarios where BERTbase model was pre-trained by MIMIC3 discharge summaries in a centralized manner<cite> (Alsentzer et al., 2019)</cite> .
similarities
e5886e138ce8d84a48e44db3f3d6a1_0
In another attempt, <cite>Jana and Goyal (2018b)</cite> proposed various complex network measures which can be used as features to build a supervised classifier model for co-hyponymy detection, and showed improvements over other baseline approaches.
motivation
e5886e138ce8d84a48e44db3f3d6a1_1
Thus, a natural question arises as to whether network embeddings should be more effective than the handcrafted network features used by <cite>Jana and Goyal (2018b)</cite> for cohyponymy detection.
motivation
e5886e138ce8d84a48e44db3f3d6a1_2
In a recent work, <cite>Jana and Goyal (2018b)</cite> used network features extracted from the DT to detect co-hyponyms. In our approach, we attempt to use embeddings obtained through a network representation learning method such as node2vec (Grover and Leskovec, 2016) when applied over the DT network.
extends
e5886e138ce8d84a48e44db3f3d6a1_3
We perform experiments using three benchmark datasets for co-hyponymy detection (Weeds et al., 2014; Santus et al., 2016; <cite>Jana and Goyal, 2018b)</cite> .
uses
e5886e138ce8d84a48e44db3f3d6a1_4
In a recent work, <cite>Jana and Goyal (2018b)</cite> used network features extracted from the DT to detect co-hyponyms.
uses
e5886e138ce8d84a48e44db3f3d6a1_5
Evaluation results: We evaluate the usefulness of DT embeddings against three benchmark datasets for cohyponymy detection (Weeds et al., 2014; Santus et al., 2016; <cite>Jana and Goyal, 2018b)</cite> , following their experimental setup.
uses
e5886e138ce8d84a48e44db3f3d6a1_6
(Ferret, 2017) apply distributional thesaurus embedding for synonym extraction and expansion tasks whereas Jana and Goyal (2018a) use it to improve the state-of-theart performance of word similarity/relatedness tasks, word analogy task etc. Thus, a natural question arises as to whether network embeddings should be more effective than the handcrafted network features used by <cite>Jana and Goyal (2018b)</cite> for cohyponymy detection. Being motivated by this connection, we investigate how the information captured by network representation learning methodologies on distributional thesaurus can be used in discriminating word pairs having co-hyponymy relation from the word pairs having hypernymy, meronymy relation or any random pair of words. Evaluation results: We evaluate the usefulness of DT embeddings against three benchmark datasets for cohyponymy detection (Weeds et al., 2014; Santus et al., 2016; <cite>Jana and Goyal, 2018b)</cite> , following their experimental setup.
extends
e5886e138ce8d84a48e44db3f3d6a1_7
We perform experiments using three benchmark datasets for co-hyponymy detection (Weeds et al., 2014; Santus et al., 2016; <cite>Jana and Goyal, 2018b)</cite> . For <cite>each of these</cite>, we follow the same experimental setup as discussed by the authors and compare our method with the method proposed by the author as well as the state-of-the-art models by <cite>Jana and Goyal (2018b)</cite> . We perform the analysis of three datasets to investigate the extent of overlap present in these publicly available benchmark datasets and find out that 45.7% word pairs of dataset prepared by Weeds et al. (2014) are present in dataset ROOT9 prepared by Santus et al. (2016) . This intersection set comprises 27.8% of the ROOT9 dataset. Similarly 36.7% word pairs of dataset prepared by Weeds et al. (2014) are present in the whole dataset prepared by <cite>Jana and Goyal (2018b)</cite> . This intersection set comprises 44.9% of the dataset prepared by <cite>Jana and Goyal (2018b)</cite> .
uses
e5886e138ce8d84a48e44db3f3d6a1_8
The relation between word pair holds if the lin similarity (Lin, 1998) of the word vectors is greater than some threshold p Table 3 : Accuracy scores on a ten-fold cross validation for cohyponym BLESS dataset of our models along with the top two baseline models (one supervised, one semisupervised) described in (Weeds et al., 2014) and models described in <cite>(Jana and Goyal, 2018b )</cite>
uses
e5886e138ce8d84a48e44db3f3d6a1_9
Table 5 : Accuracy scores on a ten-fold cross validation of models (svmSS, rfALL) proposed by <cite>Jana and Goyal (2018b)</cite> and our models for the dataset prepared by <cite>Jana and Goyal (2018b)</cite> .
uses
e5886e138ce8d84a48e44db3f3d6a1_11
Co-Hyp vs Hyper (Santus et al., 2016) 97.8 95.7 <cite>(Jana and Goyal, 2018b)</cite> 99 Table 4 : Percentage F1 scores on a ten-fold cross validation of our models along with the best models described in (Santus et al., 2016) and <cite>(Jana and Goyal, 2018b)</cite> for ROOT9 dataset a set of baseline methodologies, the descriptions of which are presented in Table 1 .
uses
e5886e138ce8d84a48e44db3f3d6a1_12
Co-Hyp vs Hyper (Santus et al., 2016) 97.8 95.7 <cite>(Jana and Goyal, 2018b)</cite> 99 Table 4 : Percentage F1 scores on a ten-fold cross validation of our models along with the best models described in (Santus et al., 2016) and <cite>(Jana and Goyal, 2018b)</cite> for ROOT9 dataset a set of baseline methodologies, the descriptions of which are presented in Table 1 . Here, the best model proposed by <cite>Jana and Goyal (2018b)</cite> uses SVM classifier which is fed with structural similarity of the words in the given word pair from the distributional thesaurus network.
uses
e5886e138ce8d84a48e44db3f3d6a1_13
Here, the best model proposed by <cite>Jana and Goyal (2018b)</cite> uses SVM classifier which is fed with structural similarity of the words in the given word pair from the distributional thesaurus network. We see that all the 4 proposed methods perform at par or better than the baselines, and using RF CC gives a 15.4% improvement over the best results reported.
differences
e5886e138ce8d84a48e44db3f3d6a1_14
Table 4 represents the performance comparison of our models with the best stateof-the-art models reported in (Santus et al., 2016) and <cite>(Jana and Goyal, 2018b)</cite> .
uses
e5886e138ce8d84a48e44db3f3d6a1_15
Here, the best model proposed by Santus et al. (2016) uses Random Forest classifier which is fed with nine corpus based features like frequency of words, co-occurrence frequency etc., and the best model proposed by <cite>Jana and Goyal (2018b)</cite> use Random Forest classifier which is fed with five complex network features like structural similarity, shortest path etc.
uses
e5886e138ce8d84a48e44db3f3d6a1_16
In the third experiment we use the dataset specifically build for co-hyponymy detection in one of the recent works by <cite>Jana and Goyal (2018b)</cite> . <cite>This dataset</cite> is extracted from BLESS (Baroni and Lenci, 2011) and divided into three small datasets-Co-Hypo vs Hyper, Co-Hypo vs Mero, Co-Hypo Vs Random.
uses
e5886e138ce8d84a48e44db3f3d6a1_17
Following the same setup, we report accuracy scores for ten-fold cross validation for each of these three datasets of our models along with the best models (svmSS, rfALL) reported by <cite>Jana and Goyal (2018b)</cite> in Table 5 .
uses
e5886e138ce8d84a48e44db3f3d6a1_18
<cite>Jana and Goyal (2018b)</cite> use SVM classifier with structural similarity between words in a word pair as feature to obtain svmSS and use Random Forest classifier with five complex network measures computed from distributional thesaurus network as features to obtain rfALL.
background
e5ef75cd497dd94b4cf818291707df_0
Second, although the feature set is fundamentally a combination of those used in previous works <cite>(Zhang and Clark, 2010</cite>; Huang and Sagae, 2010) , to integrate them in a single incremental framework is not straightforward.
uses
e5ef75cd497dd94b4cf818291707df_1
Among the many recent works on joint segmentation and POS tagging for Chinese, the linear-time incremental models by Zhang and Clark (2008) and<cite> Zhang and Clark (2010)</cite> largely inspired our model.
similarities uses
e5ef75cd497dd94b4cf818291707df_2
More recently,<cite> Zhang and Clark (2010)</cite> proposed an efficient character-based decoder for their word-based model.
background
e5ef75cd497dd94b4cf818291707df_3
Particularly, we change the role of the shift action and additionally use the append action, inspired by the character-based actions used in the joint segmentation and POS tagging model by<cite> Zhang and Clark (2010)</cite> .
uses
e5ef75cd497dd94b4cf818291707df_4
Following<cite> Zhang and Clark (2010)</cite> , the POS tag is assigned to the word when its first character is shifted, and the word-tag pairs observed in the training data and the closed-set tags (Xia, 2000) are used to prune unlikely derivations.
uses
e5ef75cd497dd94b4cf818291707df_5
We can first think of using the number of shifted characters as the step index, as<cite> Zhang and Clark (2010)</cite> does.
uses
e5ef75cd497dd94b4cf818291707df_6
In our framework, because an action increases the step index by 1 (for SH(t) or RL/RR) or 2 (for A), we need to use two beams to store new states at each step. Theoretically, the computational time is greater than that with the character-based joint segmentation and tagging model by<cite> Zhang and Clark (2010)</cite> by a factor of 2.1, when the same beam size is used.
differences
e5ef75cd497dd94b4cf818291707df_7
The feature set of our model is fundamentally a combination of the features used in the state-of-the-art joint segmentation and POS tagging model<cite> (Zhang and Clark, 2010)</cite> and dependency parser (Huang and Sagae, 2010) , both of which are used as baseline models in our experiment.
uses
e5ef75cd497dd94b4cf818291707df_8
The list of the features used in our joint model is presented in Table 1 , where S01-S05, W01-W21, and T01-05 are taken from<cite> Zhang and Clark (2010)</cite> , and P01-P28 are taken from Huang and Sagae (2010) .
uses
e5ef75cd497dd94b4cf818291707df_9
These features will also be used in our reimplementation of the model by<cite> Zhang and Clark (2010)</cite> .
uses
e5ef75cd497dd94b4cf818291707df_10
We use the following baseline and proposed models for evaluation. • SegTag: our reimplementation of the joint segmentation and POS tagging model by<cite> Zhang and Clark (2010)</cite> .
uses
e5ef75cd497dd94b4cf818291707df_11
We use the following baseline and proposed models for evaluation. tagging (Zhang and Clark, 2008;<cite> Zhang and Clark, 2010)</cite> and dependency parsing (Huang and Sagae, 2010) .
uses
e5ef75cd497dd94b4cf818291707df_12
Table 5 and Table 6 show a comparison of the segmentation and POS tagging accuracies with other state-of-the-art models. "Zhang '10" is the incremental model by<cite> Zhang and Clark (2010)</cite> .
uses
e608068f472e7045b682f979fd5295_0
Another closely related work is our own previously proposed method for leveraging on resources available for English to construct resources for a second language<cite> (Mihalcea et al., 2007)</cite> . That method assumed the availability of a bridge between languages, such as a bilingual lexicon or a parallel corpus. Instead, in the method proposed here, we rely exclusively on language-specific resources, and do not make use of any such bilingual resources which may not always be available.
motivation background differences
e608068f472e7045b682f979fd5295_1
More details about this data set are available in<cite> (Mihalcea et al., 2007)</cite> .
uses background
e608068f472e7045b682f979fd5295_3
Note that<cite> (Mihalcea et al., 2007</cite> ) also proposed a corpusbased method for subjectivity classification; however that method is supervised and thus not directly comparable with the approach introduced in this paper.
differences
e68d09937d522dc5acac9637eb2a8b_0
Two baseNP data sets have been put forward by <cite>(Ramshaw and Marcus, 1995)</cite> .
background
e68d09937d522dc5acac9637eb2a8b_1
A standard data set for this task was put forward at the CoNLL-99 workshop. The noun phrases in this data set are the same as in the Treebank and therefore the baseNPs in this data set are slightly different from the ones in the <cite>(Ramshaw and Marcus, 1995)</cite> data sets.
background
e68d09937d522dc5acac9637eb2a8b_2
And third, 1This <cite>(Ramshaw and Marcus, 1995)</cite> baseNP data set is available via ftp://ftp.cis.upenn.edu/pub/chunker/ 2Software for generating the data is available from http://lcg-www.uia.ac.be/conl199/npb/ with the FZ=I rate which is equal to (2*precision*recall)/(precision+recall).
background
e68d09937d522dc5acac9637eb2a8b_3
An alternative representation for baseNPs has been put forward by <cite>(Ramshaw and Marcus, 1995)</cite> .
background
e68d09937d522dc5acac9637eb2a8b_4
(Tjong Kim Sang and Veenstra, 1999) have presented three variants of this tagging representation. They have used the <cite>(Ramshaw and Marcus, 1995)</cite> representation as well (IOB1).
background
e68d09937d522dc5acac9637eb2a8b_5
We have applied it to the two data sets mentioned in <cite>(Ramshaw and Marcus, 1995)</cite> .
uses
e68d09937d522dc5acac9637eb2a8b_6
For this purpose we performed a 10-fold cross validation experiment on the baseNP training data, sections 15-18 of the WSJ part of the Penn Treebank (211727 tokens). Like the data used by <cite>(Ramshaw and Marcus, 1995)</cite> , this data was retagged by the Brill tagger in order to obtain realistic part-of-speech (POS) tags 3. The data was segmented into baseNP parts and non-baseNP parts in a similar fashion as the data used by <cite>(Ramshaw and Marcus, 1995)</cite> .
similarities uses
e68d09937d522dc5acac9637eb2a8b_7
<cite>(Ramshaw and Marcus, 1995)</cite> have build a chunker by applying transformation-based learning to sections of the Penn Treebank.
background
e68d09937d522dc5acac9637eb2a8b_8
They compare two data representations and report that a representation with bracket structures outperforms the IOB tagging representation introduced by <cite>(Ramshaw and Marcus, 1995)</cite> .
differences
e7c947a02bb0e81d6b6b4b9da74024_0
<cite>Bolukbasi et al. (2016b)</cite> show that using word embeddings for simple analogies surfaces many gender stereotypes.
background
e7c947a02bb0e81d6b6b4b9da74024_1
Recently, some work has been done to reduce the gender bias in word embeddings, both as a post-processing step<cite> (Bolukbasi et al., 2016b)</cite> and as part of the training procedure (Zhao et al., 2018) .
background
e7c947a02bb0e81d6b6b4b9da74024_2
<cite>Bolukbasi et al. (2016b)</cite> define the gender bias of a word w by its projection on the "gender direction": − → w · ( − → he − −→ she), assuming all vectors are normalized.
background
e7c947a02bb0e81d6b6b4b9da74024_3
Both <cite>Bolukbasi et al. (2016b)</cite> and Zhao et al. (2018) propose methods for debiasing word embeddings, substantially reducing the bias according to the suggested definition.
background
e7c947a02bb0e81d6b6b4b9da74024_4
2 In a seminal work, <cite>Bolukbasi et al. (2016b)</cite> use a post-processing debiasing method.
background