{
"paper_id": "W11-0104",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:38:12.743428Z"
},
"title": "Word Sense Disambiguation with Multilingual Features",
"authors": [
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of North Texas",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of North Texas",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper explores the role played by a multilingual feature representation for the task of word sense disambiguation. We translate the context of an ambiguous word in multiple languages, and show through experiments on standard datasets that by using a multilingual vector space we can obtain error rate reductions of up to 25%, as compared to a monolingual classifier.",
"pdf_parse": {
"paper_id": "W11-0104",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper explores the role played by a multilingual feature representation for the task of word sense disambiguation. We translate the context of an ambiguous word in multiple languages, and show through experiments on standard datasets that by using a multilingual vector space we can obtain error rate reductions of up to 25%, as compared to a monolingual classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Ambiguity is inherent to human language. In particular, word sense ambiguity is prevalent in all natural languages, with a large number of the words in any given language carrying more than one meaning. For instance, the English noun plant can mean green plant or factory; similarly the French word feuille can mean leaf or paper. The correct sense of an ambiguous word can be selected based on the context where it occurs, and correspondingly the problem of word sense disambiguation is defined as the task of automatically assigning the most appropriate meaning to a polysemous word within a given context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Among the various knowledge-based (Lesk, 1986; Mihalcea et al., 2004) and data-driven (Yarowsky, 1995; Ng and Lee, 1996) word sense disambiguation methods that have been proposed to date, supervised systems have consistently been observed to achieve the highest performance. In these systems, the sense disambiguation problem is formulated as a supervised learning task, where each sense-tagged occurrence of a particular word is transformed into a feature vector which is then used in an automatic learning process. One of the main drawbacks associated with these methods is the fact that their performance is closely connected to the amount of labeled data available.",
"cite_spans": [
{
"start": 34,
"end": 46,
"text": "(Lesk, 1986;",
"ref_id": "BIBREF6"
},
{
"start": 47,
"end": 69,
"text": "Mihalcea et al., 2004)",
"ref_id": "BIBREF10"
},
{
"start": 86,
"end": 102,
"text": "(Yarowsky, 1995;",
"ref_id": "BIBREF15"
},
{
"start": 103,
"end": 120,
"text": "Ng and Lee, 1996)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate a new supervised word sense disambiguation method that is able to take additional advantage of the sense-labeled examples by exploiting the information that can be obtained from a multilingual representation. We show that by representing the features in a multilingual space, we are able to improve the performance of a word sense disambiguation system by a significant margin, as compared to a traditional system that uses only monolingual features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the large number of word sense disambiguation methods that have been proposed so far, targeting the resolution of word ambiguity in different languages, there are only a few methods that try to explore more than one language at a time. The work that is perhaps most closely related to ours is the bilingual bootstrapping method introduced in (Li and Li, 2002) , where word translations are automatically disambiguated using information iteratively drawn from two languages. Unlike that approach, which iterates between two languages to select the correct translation for a given target word, in our method we simultaneously use the features extracted from several languages. In fact, our method can handle more than two languages at a time, and we show that the accuracy of the disambiguation algorithm increases with the number of languages used.",
"cite_spans": [
{
"start": 350,
"end": 367,
"text": "(Li and Li, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There have also been a number of attempts to exploit parallel corpora for word sense disambiguation (Resnik and Yarowsky, 1999; Diab and Resnik, 2002; Ng et al., 2003) , but in that line of work the parallel texts were mainly used as a way to induce word senses or to create sense-tagged corpora, rather than as a source of additional multilingual views for the disambiguation features. Another related technique is concerned with the selection of correct word senses in context using large corpora in a second language (Dagan and Itai, 1994) , but as before, the additional language is used to help distinguish between the word senses in the original language, and not as a source of additional information for the disambiguation context.",
"cite_spans": [
{
"start": 100,
"end": 127,
"text": "(Resnik and Yarowsky, 1999;",
"ref_id": "BIBREF14"
},
{
"start": 128,
"end": 150,
"text": "Diab and Resnik, 2002;",
"ref_id": "BIBREF2"
},
{
"start": 151,
"end": 167,
"text": "Ng et al., 2003)",
"ref_id": "BIBREF12"
},
{
"start": 520,
"end": 542,
"text": "(Dagan and Itai, 1994)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Also related is the recent SEMEVAL task that has been proposed for cross-lingual lexical substitution, where the word sense disambiguation task was more flexibly formulated as the identification of crosslingual lexical substitutes in context (Mihalcea et al., 2010) . A number of different approaches have been proposed by the teams participating in the task, and although several of them involved the translation of contexts or substitutes from one language to another, none of them attempted to make simultaneous use of the information available in the two languages.",
"cite_spans": [
{
"start": 242,
"end": 265,
"text": "(Mihalcea et al., 2010)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, although the multilingual subjectivity classifier proposed in Banea et al. (2010) is not directly applicable to the disambiguation task we address in this paper, their findings are similar to ours. In that paper, the authors showed how a natural language task can benefit from the use of features drawn from multiple languages, thus supporting the hypothesis that multilingual features can be effectively used to improve the accuracy of a monolingual classifier.",
"cite_spans": [
{
"start": 71,
"end": 90,
"text": "Banea et al. (2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work seeks to explore the expansion of a monolingual feature set with features drawn from multiple languages in order to generate a more robust and more effective vector-space representation that can be used for the task of word sense disambiguation. While traditional monolingual representations allow a supervised learning system to achieve a certain accuracy, we try to surpass this limitation by infusing additional information into the model, mainly in the form of features extracted from the machine-translated view of the monolingual data. A statistical machine translation (MT) engine not only provides a dictionary-based translation of the words surrounding a given ambiguous word, but also encodes the translation knowledge derived from very large parallel corpora, thus accounting for the contextual dependencies between the words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "In order to better explain why a multilingual vector space provides for a better representation for the word sense disambiguation task, consider the following examples centered around the ambiguous verb build. 1 For illustration purposes, we only show examples for four out of the ten possible meanings in WordNet (Fellbaum, 1998) , and we only show the translations in one language (French). All the translations are performed using the Google Translate engine.",
"cite_spans": [
{
"start": 314,
"end": 330,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "En 1: Telegraph Co. said it will spend $20 million to build a factory in Guadalajara, Mexico, to make telephone answering machines. (sense id 1) Fr 1: Telegraph Co. a annonc\u00e9 qu'il d\u00e9pensera 20 millions de dollars pour construire une usine \u00e0 Guadalajara, au Mexique, pour faire r\u00e9pondeurs t\u00e9l\u00e9phoniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "En 2: A member in the House leadership and skilled legislator, Mr. Fazio nonetheless found himself burdened not only by California's needs but by Hurricane Hugo amendments he accepted in a vain effort to build support in the panel. (sense id 3) Fr 2: Un membre de la direction de la Chambre et le l\u00e9gislateur comp\u00e9tent, M. Fazio a n\u00e9anmoins conclu lui-m\u00eame souffre, non seulement par les besoins de la Californie, mais par l'ouragan Hugo amendements qu'il a accept\u00e9 dans un vain effort pour renforcer le soutien dans le panneau.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "En 3: Burmah Oil PLC, a British independent oil and specialty-chemicals marketing concern, said SHV Holdings N.V. has built up a 7.5% stake in the company. (sense id 3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "Fr 3: Burmah Oil PLC, une huile ind\u00e9pendant britannique et le souci de commercialisation des produits chimiques de sp\u00e9cialit\u00e9, a d\u00e9clar\u00e9 SHV Holdings NV a acquis une participation de 7,5% dans la soci\u00e9t\u00e9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "En 4: Plaintiffs' lawyers say that buildings become \"sick\" when inadequate fresh air and poor ventilation systems lead pollutants to build up inside. (sense id 2) Fr 4: Avocats des plaignants disent que les b\u00e2timents tombent malades quand l'insuffisance d'air frais et des syst\u00e8mes de ventilation insuffisante de plomb polluants de s'accumuler \u00e0 l'int\u00e9rieur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "As illustrated by these examples, the multilingual representation helps in two important ways. First, it attempts to disambiguate the target ambiguous word by assigning it a different translation depending on the context where it occurs. For instance, the first example includes a usage for the verb build in its most frequent sense, namely that of construct (WordNet: make by combining materials and parts), and this sense is correctly translated into French as construire. In the second sentence, build is used as part of the verbal expression build support where it means to form or accumulate steadily (WordNet), and it is accurately translated in both French sentences as renforcer. For sentences three and four, build is followed by the adverb up, yet in the first case, its sense id in WordNet is 3, build or establish something abstract, while in the second it is 2, form or accumulate steadily. By inferring from the co-occurrence of additional words appearing in the context, the MT engine differentiates the two usages in French, translating the first occurrence as acquis and the second one as accumuler.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "Second, the multilingual representation also significantly enriches the feature space, by adding features drawn from multiple languages. For instance, the feature vector for the first example will not only include English features such as factory and make, but it will also include additional French features such as usine and faire. Similarly, the second example will have a feature vector including words such as buildings and systems, and also b\u00e2timents and syst\u00e8mes. While this multilingual representation can sometimes result in redundancy when there is a one-to-one translation between languages, in most cases, however, the translations will enrich the feature space, by either indicating that two features in English share the same meaning (e.g., the words manufactory and factory will both be translated as usine in French), or by disambiguating ambiguous English features using different translations (e.g., the context word plant will be translated in French as usine or plante, depending on its meaning).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "Appending multilingual features to the monolingual vector therefore generates a more orthogonal vector space. If, previously, the different senses of build were completely dependent on their surrounding context in the source language, now they are additionally dependent on the disambiguated translation of build given its context, as well as the context itself and the translation of the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "We test our model on two publicly available word sense disambiguation datasets. Each dataset includes a number of ambiguous words. For each word, a number of sample contexts were extracted and then manually labeled with their correct sense. Therefore, both datasets follow a Zipfian distribution of senses in context, given their natural usage. Note also that senses do not cross part-of-speech boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "The TWA 2 (two-way ambiguities) dataset contains sense tagged examples for six words that have two-way ambiguities (bass, crane, motion, palm, plant, tank). These are words that have been previously used in word sense disambiguation experiments reported in (Yarowsky, 1995; Mihalcea, 2003) . Each word has approximately 100 to 200 examples extracted from the British National Corpus. Since the words included in this dataset have only two homonym senses, the classification task is easier. The second dataset is the SEMEVAL 2007 corpus (Pradhan et al., 2007) , 3 consisting of a sample of 35 nouns and 65 verbs with usage examples extracted from the Penn Treebank as well as the Brown corpus, and annotated with OntoNotes sense tags (Hovy et al., 2006) . These senses are more coarse grained when compared to the traditional sense repository encoded in the WordNet lexical database. While OntoNotes attains over 90% inter-annotator agreement, rendering it particularly useful for supervised learning approaches, WordNet is too fine grained even for human judges to agree (Hovy et al., 2006) . The number of examples available per word and per sense varies greatly; some words have as few as 50 examples, while some others can have as many as 2,000 examples. Some of these contexts are considerably longer than those appearing in TWA, containing around 200 words. For the experiments reported in this paper, given the limitations imposed by the number of contexts that can be translated by the online translation engine, 4 we randomly selected a subset of 31 nouns and verbs from this dataset.",
"cite_spans": [
{
"start": 257,
"end": 273,
"text": "(Yarowsky, 1995;",
"ref_id": "BIBREF15"
},
{
"start": 274,
"end": 289,
"text": "Mihalcea, 2003)",
"ref_id": "BIBREF8"
},
{
"start": 536,
"end": 558,
"text": "(Pradhan et al., 2007)",
"ref_id": "BIBREF13"
},
{
"start": 733,
"end": 752,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 1071,
"end": 1090,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "In order to generate a multilingual representation for the TWA and SEMEVAL datasets, we rely on the method proposed in Banea et al. (2010) and use Google Translate to transfer the data from English into several other languages and produce multilingual representations. We experiment with three languages, namely French (Fr), German (De) and Spanish (Es). Our choice is motivated by the fact that when Google made public their statistical machine translation system in 2007, these were the only languages covered by their service, and we therefore assume that the underlying statistical translation models are also the most robust. Upon translation, the data is aligned at instance level, so that the original English context is augmented with three mirroring contexts in French, German, and Spanish, respectively.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "Banea et al. (2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.2"
},
{
"text": "We extract the word unigrams from each of these contexts, and then generate vectors that consist of the original English unigrams followed by the multilingual portion obtained from all possible combinations of the three languages taken 0 through 3 at a time, or more formally C(3, k), where k = 0..3 (see Figure 1 ). For instance, a vector resulting from C(3, 0) is the traditional monolingual vector, whereas a vector built from the combination C(3, 3) contains features extracted from all languages. ",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 313,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "4.2"
},
{
"text": "For weighting, we use a parametrized scheme based on a normal distribution, in order to better leverage the multilingual features. Let us consider the following sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Weighting",
"sec_num": "4.2.1"
},
{
"text": "We made the non-slip surfaces by stippling the tops with a <head> bass </head> broom a fairly new one works best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Weighting",
"sec_num": "4.2.1"
},
{
"text": "Every instance in our datasets contains an XML-marking before and after the word to be disambiguated (also known as a headword), in order to identify it from the context. For instance, in the example above, the headword is bass. The position of this headword in the context can be considered the mean of a normal distribution. When considering a \u03c3^2 = 5, five words to the left and right of the mean are activated with a value above 10^\u22122 (see the dotted line in Figure 2 ). However, all the features are actually activated by some amount, allowing this weighting model to capture a continuous weight distribution across the entire context. In order to attain a higher level of discrepancy between the weights of consecutive words, we amplify the normal distribution curve by an empirically determined factor of 20, effectively mapping the values to an interval ranging from 0 to 4. We apply this amplified activation to every occurrence of a headword in a context. If two activation curves overlap, meaning that a given word has two possible weights, the final weight is set to the highest value (generated by the closest headword in context). Similar weighting is also performed on the translated contexts, allowing for the highest weight to be attained by the headword translated into the target language, and a progressively lower weight for its surrounding context. This method therefore allows the vector-space model to capture information pertaining to both the headword and its translations in the other languages, as well as a language-dependent gradient of the neighboring context usage. While a traditional bigram or trigram model only captures an exact expression, a normal distribution-based model is able to account for wildcards, and transforms the traditionally sparse feature space into one that is richer and more compact at the same time.",
"cite_spans": [],
"ref_spans": [
{
"start": 463,
"end": 471,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Feature Weighting",
"sec_num": "4.2.1"
},
{
"text": "We encountered several technical difficulties in translating the XML-formatted datasets, which we will expand on in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adjustments",
"sec_num": "4.3"
},
{
"text": "First of all, as every instance in our datasets contains an XML-marked headword (as shown in Section 4.2.1), the tags interfered with the MT system, and we had to remove them from the context before proceeding with the translation. The difficulty came from the fact that the translated context provides no way of identifying the translation of the original headword. In order to acquire candidate translations of the English headword, we query the Google Multilingual Dictionary 5 (setting the dictionary direction from English to the target language) and consider only the candidates listed under the correct part-of-speech. We then scan the translated context for any of the occurrences mined from the dictionary, and locate the candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XML-formatting and alignment",
"sec_num": "4.3.1"
},
{
"text": "In some of the cases we also identify candidate headwords in the translated context that do not mirror the occurrence of a headword in the English context (i.e., the number of candidates is higher than the number of headwords in English). We solve this problem by relying on the assumption that there is an ideal position for a headword candidate, and this ideal position should reflect the relative position of the original headword with regard to its context. This alignment procedure is supported by the fact that the languages we use follow a somewhat similar sentence structure; given parallel paragraphs of text, these cross-lingual \"context anchors\" will lie in close vicinity. We therefore create two lists: the first list is the reference English list, and contains the indexes of the English headwords (normalized to 100); the second list contains the normalized indexes of the candidate headwords in the target language context. For each candidate headword in the target language, we calculate the shortest distance to a headword appearing in the reference English list. Once the overall shortest distance is found, both the candidate headword's index in the target language and its corresponding English headword's index are removed from their respective list. The process continues until the reference English list is empty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XML-formatting and alignment",
"sec_num": "4.3.1"
},
{
"text": "There are also cases when we are not able to identify a headword due to the fact that we are trying to find the lemma (extracted from the multilingual dictionary) in a fully inflected context, where the candidate translation is most probably inflected as well. As French, German and Spanish are all highly inflected languages, we are faced with two options: to either lemmatize the contexts in each of the languages, which requires a lemmatizer tuned for each language individually, or to stem them. We chose the latter option, and used Lingua::Stem::Snowball, 6 a publicly available implementation of the Porter stemmer in multiple languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inflections",
"sec_num": "4.3.2"
},
{
"text": "To summarize, all the translations are stemmed to obtain maximum coverage, and alignment is performed when the number of candidate entries found in a translated context does not match the frequency of candidate headwords in the reference English context. Also, all the contexts are processed to remove any special symbols and numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inflections",
"sec_num": "4.3.2"
},
{
"text": "In order to determine the effect of the multilingual expanded feature space on word sense disambiguation, we conduct several experiments using the TWA and SEMEVAL datasets. The results are shown in Tables 1 and 2. Our proposed model relies on a multilingual vector space, where each individual feature is weighted using a scheme based on a modified normal distribution (Section 4.2.1). As eight possible combinations are available when selecting one main language (English) and combinations of three additional languages taken 0 through 3 at a time (Spanish, French and German), we train eight Na\u00efve Bayes learners 7 on the obtained datasets: one monolingual (En), three bilingual (En-De, En-Fr, En-Es), three tri-lingual (En-De-Es, En-De-Fr, En-Fr-Es), and one quadri-lingual (En-Fr-De-Es). Each dataset is evaluated using ten-fold cross-validation; the resulting micro-accuracy measures are averaged across each of the language groupings and they appear in Tables 1 and 2 in ND-L1 (column 4) , ND-L2 (column 5), ND-L3 (column 6), and ND-L4 (column 7), respectively. Our hypothesis is that as more languages are added to the mix (and therefore the number of features increases), the learner will be able to better distinguish between the various senses.",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 214,
"text": "Tables 1 and 2.",
"ref_id": null
},
{
"start": 960,
"end": 994,
"text": "Tables 1 and 2 in ND-L1 (column 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "Our baseline consists of the predictions made by a majority class learner, which labels all examples with the predominant sense encountered in the training data. 8 Note that the most frequent sense baseline is often difficult to surpass because many of the words exhibit a disproportionate usage of their main sense (i.e., higher than 90%), such as the noun bass or the verb approve. Despite the fact that the majority class learner provides us with a supervised baseline, it does not take into consideration actual features pertaining to the instances. We therefore introduce a second, more informed baseline that relies on binary-weighted features extracted from the English view of the datasets and we train a multinomial Na\u00efve Bayes learner on this data. For every word included in our datasets, the binary-weighted Na\u00efve Bayes learner achieves the same or a higher accuracy than the most frequent sense baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "Comparing the accuracies obtained when training on the monolingual data, the binary weighted baseline surpasses the normal distribution-based weighting model in only three out of six cases on the TWA dataset (difference ranging from .5% to 4.81%), and in 6 out of 31 cases on the SEMEVAL dataset (difference ranging from .53% to 7.57%, where for 5 of the words, the difference is lower than 3%). The normal distribution-based model is thus able to activate regions around a particular headword, and not an entire context, ensuring more accurate sense boundaries, and allowing this behavior to be expressed in multilingual vector spaces as well (as seen in columns 7-9 in Tables 1 and 2) .",
"cite_spans": [],
"ref_spans": [
{
"start": 671,
"end": 686,
"text": "Tables 1 and 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.3"
},
{
"text": "When comparing the normal distribution-based model using one language versus more languages, five out of the six words in TWA score highest when the expanded feature space includes all languages, and one scores highest for combinations of three languages (only .17% higher than the accuracy obtained for all languages). We notice the same behavior in the SEMEVAL dataset, where 18 of the words exhibit their highest accuracy when all four languages are taken into consideration, and three achieve the highest score for three-language groupings (at most .37% higher than the accuracy obtained for the four-language grouping). While the model displays a steady improvement as more languages are added to the mix, four of the SEMEVAL words are unable to benefit from this expansion, namely the verbs buy (-0.61%), care (-1.45%), feel (-0.29%) and propose (-2.94%). Even so, we are able to achieve error rate reductions ranging from 6.52% to 63.41% for TWA, and from 3.33% to 34.62% for SEMEVAL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.3"
},
{
"text": "To summarize the performance of the model based on the expanded feature set and the proposed baselines, we aggregate all the accuracies from Tables 1 and 2, and present the results obtained in Table 3 . The monolingual modified normal-distribution model is able to exceed the most common sense baseline and the binary-weighted Na\u00efve Bayes learner for both datasets, proving its superiority as compared to a purely binary-weighted model. Furthermore, we notice a consistent increase in accuracy as more languages are added to the vector space, displaying an average increment of 1.7% at every step for TWA, and 0.67% for SEMEVAL. The highest accuracy is achieved when all languages are taken into consideration: 86.02% for TWA and 83.36% for SEMEVAL, corresponding to an error reduction of 25.96% and 10.58%, respectively. Table 1 : Accuracies obtained on the TWA dataset; Columns: 1 -words contained in the corpus, 2 -number of examples for a given word, 3 -number of senses covered by the examples, 4 -micro-accuracy obtained when using the most common sense (MCS), 5 -micro-accuracy obtained using the multinomial Na\u00efve Bayes classifier on binary weighted monolingual features in English, 6 -9 -average micro-accuracy computed over all possible combinations of English and 3 languages taken 0 through 3 at a time, obtained from features weighted following a modified normal distribution with \u03c3^2 = 5 and an amplification factor of 20 using a multinomial Na\u00efve Bayes learner, where 6 -one language, 7 -2 languages, 8 -3 languages, 9 -4 languages, 10 -error reduction calculated between ND-L1 (6) and ND-L4 (9)",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 3",
"ref_id": null
},
{
"start": 822,
"end": 829,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.3"
},
{
"text": "This paper explored the cumulative ability of features originating from multiple languages to improve on the monolingual word sense disambiguation task. We showed that a multilingual model is suited to better leverage two aspects of the semantics of text by using a machine translation engine. First, the various senses of a target word may be translated into other languages by using different words, which constitute unique, yet highly salient features that effectively expand the target word's space. Second, the translated context words themselves embed co-occurrence information that a translation engine gathers from very large parallel corpora. This information is infused in the model and allows for thematic spaces to emerge, where features from multiple languages can be grouped together based on their semantics, leading to a more effective context representation for word sense disambiguation. The average microaccuracy results showed a steadily increasing progression as more languages are added to the vector space. Using two standard word sense disambiguation datasets, we showed that a classifier based on a multilingual representation can lead to an error reduction ranging from 10.58% (SEMEVAL) to 25.96% (TWA) as compared to the monolingual classifier. 
Table 2 : Accuracies obtained on the SEMEVAL dataset; Columns: 1 - words contained in the corpus, 2 - number of examples for a given word, 3 - number of senses covered by the examples, 4 - micro-accuracy obtained when using the most common sense (MCS), 5 - micro-accuracy obtained using the multinomial Na\u00efve Bayes classifier on binary-weighted monolingual features in English, 6-9 - average micro-accuracy computed over all possible combinations of English and 3 languages taken 0 through 3 at a time, resulting from features weighted following a modified normal distribution with \u03c3^2 = 5 and an amplification factor of 20 using a multinomial Na\u00efve Bayes learner, where 6 - one language, 7 - two languages, 8 - three languages, 9 - four languages, 10 - error reduction calculated between ND-L1 (6) and ND-L4 (9) Table 3 : Aggregate accuracies obtained on the TWA and SEMEVAL datasets; Columns: 1 - dataset, 2 - average micro-accuracy obtained when using the most common sense (MCS), 3 - average micro-accuracy obtained using the multinomial Na\u00efve Bayes classifier on binary-weighted monolingual features in English, 4-7 - average micro-accuracy computed over all possible combinations of English and 3 languages taken 0 through 3 at a time, resulting from features weighted following a modified normal distribution with \u03c3^2 = 5 and an amplification factor of 20 using a multinomial Na\u00efve Bayes learner, where 4 - one language, 5 - two languages, 6 - three languages, 7 - four languages, 8 - error reduction calculated between ND-L1 (4) and ND-L4 (7)",
"cite_spans": [],
"ref_spans": [
{
"start": 1272,
"end": 1279,
"text": "Table 2",
"ref_id": null
},
{
"start": 2065,
"end": 2072,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.3"
},
{
"text": "The sentences provided and their annotations are extracted from the SEMEVAL corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cse.unt.edu/\u02dcrada/downloads.html\\#twa",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.cs.swarthmore.edu/semeval/tasks/task17/description.shtml4 We use Google Translate (http://translate.google.com/), which has a limitation of 1,000 translations per day.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.google.com/dictionary 6 http://search.cpan.org/dist/Lingua-Stem-Snowball/lib/Lingua/Stem/Snowball.pm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the multinomial Na\u00efve Bayes implementation provided by the Weka machine learning software(Hall et al., 2009). 8 Our baseline it is not the same as the traditional most common sense baseline that uses WordNet's first sense heuristic, because our data sets are not annotated with WordNet senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This material is based in part upon work supported by the National Science Foundation CAREER award #0747340 and IIS award #1018613. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multilingual subjectivity: Are more languages better?",
"authors": [
{
"first": "C",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "28--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Banea, C., R. Mihalcea, and J. Wiebe (2010, August). Multilingual subjectivity: Are more languages better? In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), Beijing, China, pp. 28-36.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Word sense disambiguation using a second language monolingual corpus",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Itai",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "563--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dagan, I. and A. Itai (1994). Word sense disambiguation using a second language monolingual corpus. Computational Linguistics 20(4), 563-596.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An unsupervised method for word sense tagging using parallel corpora",
"authors": [
{
"first": "M",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diab, M. and P. Resnik (2002, July). An unsupervised method for word sense tagging using parallel corpora. In Proceedings of the 40st Annual Meeting of the Association for Computational Linguistics (ACL 2002), Philadelphia, PA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "WordNet, An Electronic Lexical Database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum, C. (1998). WordNet, An Electronic Lexical Database. The MIT Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The weka data mining software: An update",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "SIGKDD Explorations",
"volume": "11",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hall, M., E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten (2009). The weka data mining software: An update. SIGKDD Explorations 11(1).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ontonotes: the 90In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers on XX, NAACL '06",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hovy, E., M. Marcus, M. Palmer, L. Ramshaw, and R. Weischedel (2006). Ontonotes: the 90In Proceed- ings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers on XX, NAACL '06, Morristown, NJ, USA, pp. 57-60. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lesk",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the SIGDOC Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lesk, M. (1986, June). Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the SIGDOC Conference 1986, Toronto.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word translation disambiguation using bilingual bootstrapping",
"authors": [
{
"first": "C",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, C. and H. Li (2002). Word translation disambiguation using bilingual bootstrapping. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Pennsylvania.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The role of non-ambiguous words in natural language disambiguation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the conference on Recent Advances in Natural Language Processing RANLP-2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalcea, R. (2003, September). The role of non-ambiguous words in natural language disambiguation. In Proceedings of the conference on Recent Advances in Natural Language Processing RANLP-2003, Borovetz, Bulgaria.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semeval-2010 task 2: Cross-lingual lexical substitution",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalcea, R., R. Sinha, and D. McCarthy (2010). Semeval-2010 task 2: Cross-lingual lexical substitu- tion. In Proceedings of the ACL Workshop on Semantic Evaluations, Uppsala, Sweden.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "PageRank on semantic networks, with application to word sense disambiguation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Tarau",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Figa",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20st International Conference on Computational Linguistics (COLING 2004)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalcea, R., P. Tarau, and E. Figa (2004). PageRank on semantic networks, with application to word sense disambiguation. In Proceedings of the 20st International Conference on Computational Lin- guistics (COLING 2004), Geneva, Switzerland.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Integrating multiple knowledge sources to disambiguate word sense: An examplar-based approach",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL 1996)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ng, H. and H. Lee (1996). Integrating multiple knowledge sources to disambiguate word sense: An examplar-based approach. In Proceedings of the 34th Annual Meeting of the Association for Compu- tational Linguistics (ACL 1996), Santa Cruz.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Exploiting parallel texts for word sense disambiguation: An empirical study",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ng, H., B. Wang, and Y. Chan (2003, July). Exploiting parallel texts for word sense disambiguation: An empirical study. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), Sapporo, Japan.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semeval-2007 task-17: English lexical sample, srl and all words",
"authors": [
{
"first": "S",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Dligach",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradhan, S., E. Loper, D. Dligach, and M. Palmer (2007, June). Semeval-2007 task-17: English lex- ical sample, srl and all words. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), Prague, Czech Republic.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distinguishing systems and distinguishing senses: new evaluation methods for word sense disambiguation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural Language Engineering",
"volume": "5",
"issue": "2",
"pages": "113--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, P. and D. Yarowsky (1999). Distinguishing systems and distinguishing senses: new evaluation methods for word sense disambiguation. Natural Language Engineering 5(2), 113-134.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL 1995)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarowsky, D. (1995, June). Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL 1995), Cambridge, MA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Construction of a multilingual vector (combinations of target languages C(3, k), where k = 0..3"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Example of sentence whose words are weighted based on a normal distribution with variance of 5, and an amplification factor of 20"
}
}
}
}